GitHub user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/9553#issuecomment-165249868
The use case here is that I want to build one common Spark distribution that is used
across many different clusters. Those clusters may not have Hive running yet,
or perhaps I just don't want to have to specify hive-site.xml and include
hcatalog, etc. If I'm not using Hive, then I don't need spark-shell to load it
for me. Or perhaps I'm trying to use Spark while Hive is taken down for
maintenance; in that case I either can't run Spark at all or I get error messages.
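
For concreteness, here is a minimal sketch of the kind of workaround this currently forces (assuming a Spark 1.x spark-shell, where `sc` is already defined): building a plain SQLContext by hand so nothing touches the Hive metastore. This is just an illustration of the pain point, not the change in this PR.

```scala
// In a Spark 1.x spark-shell, the shell normally creates `sqlContext`
// as a HiveContext when Hive classes are on the classpath, which fails
// or complains if the metastore is unreachable. A plain SQLContext
// avoids any Hive dependency at runtime.
import org.apache.spark.sql.SQLContext

val plainSqlContext = new SQLContext(sc) // `sc` is provided by spark-shell

// Ordinary DataFrame operations work fine without Hive:
val df = plainSqlContext.range(0, 10)
df.show()
```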