Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2982#issuecomment-68906274
Hi @pwendell, thanks for taking a look.
My reasoning for adding a new env variable for the distribution's class
path is twofold:
- It's not a user option; users are not expected to set it. It's meant to be
set by Spark distributions and never changed. Of course users can go and
modify it, just as they can modify other things that aren't supposed to be
modified, but that's not a supported scenario.
- It works differently from `spark.{driver,executor}.extraClassPath`. Those
options prepend entries to Spark's classpath; the new env variable is appended
to it. That makes a huge difference: e.g., if you prepend
`/usr/lib/hadoop/lib/*` to Spark's classpath you can break Spark, because
that directory contains older versions of some libraries Spark uses.
Another advantage of not overloading the existing option for this is that
the option stays available to users, without them having to worry about
breaking the distribution's settings.
About the testing classpath, I'll take a second look. Worst case, I think I
can piggyback on the dist classpath env variable instead of creating a separate
one. I just wanted something that didn't require changing every single test
(which would potentially be the case if I went the `spark.*.extraClassPath`
route).
I'm currently working on some other changes but I'll try to get back to
this by the end of the day.