Venki,

Changing it to true yields the stack trace in the original email. I'll have to check the classpath for the core-site.xml; perhaps that is the key.
Will

Sent from my iPhone

> On Dec 21, 2015, at 12:19 PM, Venki Korukanti <[email protected]> wrote:
>
> There is one config parameter that needs to be changed in order to work
> with the Kerberos-enabled Hive metastore:
>
> "hive.metastore.sasl.enabled": "true",
>
> For working with Kerberos HDFS, I think you need to have the core-site.xml
> (containing the Kerberos credentials of the NameNode etc.) in Drill's
> classpath. I haven't tested this configuration, but it is worth a try.
>
> Thanks
> Venki
>
> On Tue, Dec 15, 2015 at 7:31 PM, William Witt <[email protected]> wrote:
>
>> By the lack of response, I take it that Drill on a kerberized cluster has
>> not been successfully implemented (except maybe on MapR). So it looks like
>> I might need to join the ranks of the Drill developer community to make
>> this happen. Any pointers in the right direction would be helpful.
>>
>> Will
>>
>> Sent from my iPhone
>>
>>> On Dec 12, 2015, at 4:41 PM, William Witt <[email protected]> wrote:
>>>
>>> I’m trying to use Drill on a kerberized CDH cluster.
>>> I attempted to adapt the MapR directions
>>> [http://doc.mapr.com/display/MapR/Configuring+Drill+to+Use+Kerberos+with+Hive+Metastore]
>>> to my use case, but keep getting a stack trace from Drill when enabling
>>> SASL:
>>>
>>> 17:24:27.602 [qtp1083696596-53] ERROR o.a.thrift.transport.TSaslTransport - SASL negotiation failure
>>> javax.security.sasl.SaslException: GSS initiate failed
>>>     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212) ~[na:1.7.0_79]
>>>     at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) [drill-hive-exec-shaded-1.3.0.jar:1.3.0]
>>>     at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253) ~[drill-hive-exec-shaded-1.3.0.jar:1.3.0]
>>>     at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) [drill-hive-exec-shaded-1.3.0.jar:1.3.0]
>>>     at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) [drill-hive-exec-shaded-1.3.0.jar:1.3.0]
>>>     at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) [drill-hive-exec-shaded-1.3.0.jar:1.3.0]
>>>     at java.security.AccessController.doPrivileged(Native Method) [na:1.7.0_79]
>>>     at javax.security.auth.Subject.doAs(Subject.java:415) [na:1.7.0_79]
>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:na]
>>>     at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) [drill-hive-exec-shaded-1.3.0.jar:1.3.0]
>>>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:364) [hive-metastore-1.0.0.jar:1.0.0]
>>>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:221) [hive-metastore-1.0.0.jar:1.0.0]
>>>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:167) [hive-metastore-1.0.0.jar:1.0.0]
>>>     at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient.<init>(DrillHiveMetaStoreClient.java:134) [drill-storage-hive-core-1.3.0.jar:1.3.0]
>>>     at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient.<init>(DrillHiveMetaStoreClient.java:52) [drill-storage-hive-core-1.3.0.jar:1.3.0]
>>>     at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient$NonCloseableHiveClientWithCaching.<init>(DrillHiveMetaStoreClient.java:306) [drill-storage-hive-core-1.3.0.jar:1.3.0]
>>>
>>> 1) Drill is being started after a kinit
>>> 2) Storage plugin configured as follows returns table not found every time:
>>>
>>> {
>>>   "type": "hive",
>>>   "enabled": true,
>>>   "configProps": {
>>>     "hive.metastore.uris": "thrift:/[REDACTED]:9083",
>>>     "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
>>>     "fs.default.name": "hdfs://[REDACTED]:8020/",
>>>     "hive.server2.enable.doAs": "false",
>>>     "hive.metastore.sasl.enabled": "false",
>>>     "hive.metastore.kerberos.principal": "hive/[REDACTED]@[REDACTED]"
>>>   }
>>> }
>>>
>>> Am I missing something obvious?
>>>
>>> Will
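Putting Venki's suggestion together with the quoted plugin definition, a Hive storage-plugin config for a Kerberized metastore would look roughly like the following. This is an untested sketch: the hostnames and realm are placeholders, and the only substantive change from the quoted config is flipping hive.metastore.sasl.enabled to "true".

```json
{
  "type": "hive",
  "enabled": true,
  "configProps": {
    "hive.metastore.uris": "thrift://metastore-host.example.com:9083",
    "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
    "fs.default.name": "hdfs://namenode-host.example.com:8020/",
    "hive.server2.enable.doAs": "false",
    "hive.metastore.sasl.enabled": "true",
    "hive.metastore.kerberos.principal": "hive/[email protected]"
  }
}
```

One more thing worth checking: metastore URIs normally take the form thrift://host:port with two slashes, so the single slash in the quoted "thrift:/[REDACTED]:9083" value may itself be a problem, independent of the SASL setting.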

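On the classpath question raised at the top of the thread: the Kerberos-related Hadoop client settings live in core-site.xml. A minimal, untested sketch of the properties involved (values depend on the cluster):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Switch the Hadoop client from the default "simple" auth to Kerberos -->
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <!-- Enable service-level authorization checks -->
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
</configuration>
```

In practice the simplest route is probably to copy the cluster's existing core-site.xml (and hdfs-site.xml, which typically carries the NameNode principal in dfs.namenode.kerberos.principal) into Drill's conf directory, which is on the Drillbit classpath.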