GitHub user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3238#issuecomment-65140058
Hey @jimjh, the overall approach makes sense. However, I think having a
config that is explicitly only for the DataNucleus jars is a little too
specific. Is it possible to generalize this in any way? We already have
`spark.yarn.secondary.jars`. IIUC, the reason we can't just use it directly
is that it adds the jars to the user application's class path, not Spark's.
I wonder if there's an easy way to generalize that. @vanzin, any thoughts?
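
For illustration, here is a minimal sketch of the distinction in question. The jar names below are the DataNucleus jars Spark bundles for Hive support; the local paths are placeholders, and reaching for the `extraClassPath` configs is just one possible direction for generalizing, not something settled in this thread:

```scala
import org.apache.spark.SparkConf

// `spark.yarn.secondary.jars` (populated by spark-submit from --jars) ships
// the listed jars to the YARN containers and puts them on the *user
// application's* classpath, which is why it doesn't help here.
val current = new SparkConf()
  .set("spark.yarn.secondary.jars",
       "datanucleus-core.jar,datanucleus-rdbms.jar,datanucleus-api-jdo.jar")

// By contrast, the existing extraClassPath configs prepend entries to
// Spark's *own* driver and executor classpaths, which is the behavior the
// DataNucleus jars actually need. Paths below are placeholders.
val sketch = new SparkConf()
  .set("spark.driver.extraClassPath", "/path/to/datanucleus-core.jar")
  .set("spark.executor.extraClassPath", "/path/to/datanucleus-core.jar")
```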