Hi Anupama,

To me it looks like an issue with the SPN with which you are trying to connect
to HiveServer2, i.e. hive@hostname.
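For HiveServer2 the principal in the JDBC URL normally has to be the server's
full three-part SPN, so the URL would look something like this (placeholders
only, matching the hive/_HOST@<DOMAIN> principal in your hive-site.xml):

    jdbc:hive2://<hiveserver2 ADDRESS>:10001/default;principal=hive/<hive server2 host>@<DOMAIN>;ssl=false;transportMode=http;httpPath=cliservice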

Are you able to connect to hive from spark-shell?
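A quick way to check (assuming your Spark 1.4 build includes Hive support, so
sqlContext in spark-shell is a HiveContext):

    $ spark-shell --master yarn-client
    scala> sqlContext.sql("show databases").show()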

Try getting the ticket using any other user's keytab (not the Hadoop service
keytab) and then try running the spark-submit again.
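Something along these lines, with the keytab path, principal and jar name
being placeholders to adjust to your environment:

    $ kinit -kt /etc/security/keytabs/myuser.keytab myuser@<DOMAIN>
    $ klist    # the ticket should now belong to myuser, not hive/_HOST
    $ spark-submit --master yarn-cluster \
        --principal myuser@<DOMAIN> \
        --keytab /etc/security/keytabs/myuser.keytab \
        --class SparkHiveJDBCTest your-application.jar

On YARN, Spark 1.4+ also accepts --principal and --keytab as above. Note that
your stack trace shows the connection being opened inside an executor task
(SparkHiveJDBCTest$1.call), so the credentials have to be usable on the
executors, not only on the machine where you run spark-submit.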


Thanks

Deepak

On 17 Sep 2016 12:23 am, <anupama.gangad...@daimler.com> wrote:

> Hi,
>
>
>
> I am trying to connect to Hive from a Spark application in a Kerberized
> cluster and get the following exception. Spark version is 1.4.1 and Hive
> is 1.2.1. Outside of Spark the connection goes through fine.
>
> Am I missing any configuration parameters?
>
>
>
> java.sql.SQLException: Could not open connection to
> jdbc:hive2://<hiveserver2 ADDRESS>10001/default;principal=hive/<hive server2 host>;ssl=false;transportMode=http;httpPath=cliservice: null
>     at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:206)
>     at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:178)
>     at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
>     at java.sql.DriverManager.getConnection(DriverManager.java:571)
>     at java.sql.DriverManager.getConnection(DriverManager.java:215)
>     at SparkHiveJDBCTest$1.call(SparkHiveJDBCTest.java:124)
>     at SparkHiveJDBCTest$1.call(SparkHiveJDBCTest.java:1)
>     at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1027)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1109)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1285)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
>     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
>     at org.apache.spark.scheduler.Task.run(Task.scala:70)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException
>     at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>     at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>     at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>     at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:258)
>     at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>     at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>     at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>     at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>     at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)
>     ... 21 more
>
>
>
> In the Spark conf directory, hive-site.xml has the following properties:
>
>
>
> <configuration>
>
>     <property>
>       <name>hive.metastore.kerberos.keytab.file</name>
>       <value>/etc/security/keytabs/hive.service.keytab</value>
>     </property>
>
>     <property>
>       <name>hive.metastore.kerberos.principal</name>
>       <value>hive/_HOST@<DOMAIN></value>
>     </property>
>
>     <property>
>       <name>hive.metastore.sasl.enabled</name>
>       <value>true</value>
>     </property>
>
>     <property>
>       <name>hive.metastore.uris</name>
>       <value>thrift://<Hiveserver2 address>:9083</value>
>     </property>
>
>     <property>
>       <name>hive.server2.authentication</name>
>       <value>KERBEROS</value>
>     </property>
>
>     <property>
>       <name>hive.server2.authentication.kerberos.keytab</name>
>       <value>/etc/security/keytabs/hive.service.keytab</value>
>     </property>
>
>     <property>
>       <name>hive.server2.authentication.kerberos.principal</name>
>       <value>hive/_HOST@<DOMAIN></value>
>     </property>
>
>     <property>
>       <name>hive.server2.authentication.spnego.keytab</name>
>       <value>/etc/security/keytabs/spnego.service.keytab</value>
>     </property>
>
>     <property>
>       <name>hive.server2.authentication.spnego.principal</name>
>       <value>HTTP/_HOST@<DOMAIN></value>
>     </property>
>
> </configuration>
>
>
>
> --Thank you
>
