tgravescs commented on pull request #28376:
URL: https://github.com/apache/spark/pull/28376#issuecomment-620625239


Overall I think the change is fine. You could end up having issues if the user tried the external shuffle service and the two versions weren't compatible, but that is a problem for anyone trying to run multiple versions. It obviously also requires them to have all the dependencies they need.
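
For context, a minimal sketch of the settings involved (these are real Spark configs; the application class and jar names are just placeholders). The caveat above is that the long-running shuffle service on each node must be protocol-compatible with the Spark version the application runs:

```bash
# External shuffle service: executors register shuffle output with a
# per-node service (typically running inside the YARN NodeManager), so
# that service must be compatible with the application's Spark version.
spark-submit \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --class com.example.MyApp myapp.jar   # placeholder app
```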
   
@viirya it's OK to have multiple versions of Spark on a cluster; many times the problems are just dependency issues with the different Hadoop versions. It will be RPC compatible, but you run into different versions of dependencies (Guava, Jetty, etc.).
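
As an illustration of the kind of conflict I mean (the install paths and version numbers below are hypothetical), the jars bundled under each distribution's `jars/` directory can differ:

```bash
# Compare the Guava/Jetty jars bundled with two Spark distributions built
# against different Hadoop versions (paths are hypothetical examples).
ls /opt/spark-2.4-hadoop2.7/jars | grep -E 'guava|jetty'
ls /opt/spark-3.0-hadoop3.2/jars | grep -E 'guava|jetty'
```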
   
The other way to possibly solve this is to make sure these versions are on the classpath first, but that can get tricky.
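
A sketch of what I mean (the `userClassPathFirst` options are real Spark settings, though marked experimental, which is where the trickiness comes in; jar names are placeholders):

```bash
# Ask Spark to prefer the user-supplied jars over its bundled versions.
# Experimental settings; classloader ordering is the tricky part.
spark-submit \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --jars guava-xx.jar,jetty-xx.jar \
  --class com.example.MyApp myapp.jar   # placeholder jars and app
```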
   
Is the standard Spark package that includes Hadoop sufficient for you to run with, then, @dbtsai?
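
(For reference, the alternative to the with-Hadoop package is the "Hadoop free" build, which picks up the cluster's own Hadoop jars at runtime, e.g.:)

```bash
# With a hadoop-provided ("Hadoop free") Spark build, point Spark at the
# cluster's Hadoop jars via SPARK_DIST_CLASSPATH (see Spark's
# "Using Spark's Hadoop Free Build" docs).
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
```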

