tgravescs commented on pull request #28788:
URL: https://github.com/apache/spark/pull/28788#issuecomment-642335688
Right, they could, and I agree they probably should, but I can see
environments where that isn't necessarily easy to do (finding the jars they
have to ship). The default Spark build includes Hadoop, and we distribute
Spark with the Hadoop jars. If I run this on a cluster now, it may just fail
with an error that gives me no idea what went wrong, when the previous Spark
version ran just fine.
I bring it up mostly to make sure we've thought about it and to see if anyone
has strong opinions.
I'm ok with this, but I think we should document it: put it in the release
notes, and we definitely need to update the Running on YARN docs for the
default value of spark.yarn.populateHadoopClasspath. It also wouldn't hurt to
add a couple of sentences about it in the Launching Spark on YARN section.
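For a user who hits this after upgrading, a minimal sketch of explicitly restoring the old behavior at submit time (the application class and jar names below are placeholders, not from this PR):

```bash
# Re-enable Hadoop classpath population so the job picks up the
# cluster's Hadoop jars, as earlier Spark versions did by default.
# com.example.MyApp and my-app.jar are placeholder names.
spark-submit \
  --master yarn \
  --conf spark.yarn.populateHadoopClasspath=true \
  --class com.example.MyApp \
  my-app.jar
```

The same setting could also go in spark-defaults.conf to apply cluster-wide rather than per job.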