Github user deanchen commented on the pull request:

    https://github.com/apache/spark/pull/5586#issuecomment-96842084
  
    Yes, including the HBase jars on the driver and/or executor classpath (e.g. 
_/usr/lib/hbase/lib/hbase-client.jar:/usr/lib/hbase/lib/hbase-common.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat.jar:/usr/lib/hbase/lib/hbase-protocol.jar:/usr/lib/hbase/lib/htrace-core-2.04.jar_)
 allows the driver and executors to reference the HBase configuration and 
create a new connection. The assumption is that the HBase jars are also in 
those same directories on the executors. hbase-site.xml will need to be moved 
into /conf or into the Spark conf path, since that is where the ZooKeeper 
configuration for HBase is contained.
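    As a rough sketch of that setup (the jar paths are the ones listed above; the 
property names are the standard Spark classpath settings, so treat the exact 
values as an assumption for your environment), the `spark-defaults.conf` 
entries might look like:

```
# Hypothetical spark-defaults.conf fragment: put the HBase jars on both the
# driver and executor classpaths (same paths assumed on all executor hosts).
spark.driver.extraClassPath   /usr/lib/hbase/lib/hbase-client.jar:/usr/lib/hbase/lib/hbase-common.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat.jar:/usr/lib/hbase/lib/hbase-protocol.jar:/usr/lib/hbase/lib/htrace-core-2.04.jar
spark.executor.extraClassPath /usr/lib/hbase/lib/hbase-client.jar:/usr/lib/hbase/lib/hbase-common.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat.jar:/usr/lib/hbase/lib/hbase-protocol.jar:/usr/lib/hbase/lib/htrace-core-2.04.jar
```

    With hbase-site.xml copied into the Spark conf directory so the ZooKeeper 
quorum for HBase can be resolved.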
    
    I've tested this in yarn-client and yarn-cluster mode on our secure 
production cluster with HBase 0.98, both with and without the HBase jars 
included, and also in the HDP sandbox with HBase 0.98 and an unsecured HBase 
connection (all running locally). 
    
    Updated the pull request to remove the _throw new RuntimeException_ on line 
1117 and log an error instead, since users may be running a secure YARN 
cluster without security enabled on HBase. 
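    To illustrate the behavioral change, here is a minimal sketch (in Java for 
illustration; the actual Spark code is Scala, and the method and logger names 
here are hypothetical): the token-acquisition failure is caught and logged 
rather than rethrown, so a secure YARN job without HBase security can proceed.

```java
import java.util.function.Supplier;

public class HBaseTokenExample {
    // Hypothetical stand-in for the HBase token-acquisition call, which can
    // fail when HBase security is not enabled on an otherwise secure cluster.
    static boolean obtainTokenSafely(Supplier<String> obtainToken, StringBuilder log) {
        try {
            obtainToken.get();
            return true;
        } catch (RuntimeException e) {
            // Before the update this exception was rethrown, failing the job;
            // now it is logged as an error and the job continues without a token.
            log.append("ERROR: failed to obtain HBase security token: ")
               .append(e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        boolean ok = obtainTokenSafely(() -> {
            throw new RuntimeException("HBase security is not enabled");
        }, log);
        System.out.println(ok);   // token was not obtained, but no crash
        System.out.println(log);  // the logged error message
    }
}
```

    The point is only that the failure path changes from fatal to logged; the 
real implementation lives in the YARN client code touched by this PR.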


