Kylin supports Kerberos authentication, and it doesn't need any code or
configuration change on Kylin's side.

Since Kylin works as a Hadoop client, connecting to the cluster in the
standard ways, you just need to prepare a client machine on which the
hive/yarn/hbase command lines run normally; then Kylin will work. Remember
that the Kerberos ticket will expire, so you need a cron job that
periodically refreshes it.
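
For example, you can first verify on the client machine that the command
lines work with a valid ticket, then schedule the refresh with cron. This
is just a sketch; the principal kylin@EXAMPLE.COM and the keytab path
/etc/security/keytabs/kylin.keytab are placeholders for your own
environment:

    # obtain a ticket from the keytab, then confirm the CLIs work
    kinit -kt /etc/security/keytabs/kylin.keytab kylin@EXAMPLE.COM
    klist
    hive -e "show databases;"
    echo "list" | hbase shell
    yarn application -list

    # crontab entry for the user running Kylin: renew the ticket every 6 hours
    0 */6 * * * kinit -kt /etc/security/keytabs/kylin.keytab kylin@EXAMPLE.COM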

2017-05-19 15:10 GMT+08:00 Billy Liu <[email protected]>:

> I think so. Have you given a proper Kerberos token to the current user?
>
> 2017-05-18 17:10 GMT+08:00 ran gabriele <[email protected]>:
>
> > Hello,
> >
> >
> > I am using Kylin 2.0.0 for CDH 5.7/5.8. My cluster is configured with
> > Kerberos for authentication.
> >
> >
> > Here is the error log.
> >
> >
> > 17/05/17 17:25:16 WARN ipc.RpcClientImpl: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> > 17/05/17 17:25:16 ERROR ipc.RpcClientImpl: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
> > javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> >     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> >     at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:422)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
> >     at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
> >     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
> >     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
> >     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
> >     at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:394)
> >     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:203)
> >     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
> >     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> >     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:381)
> >     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:355)
> >     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> >     at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >     at java.lang.Thread.run(Thread.java:745)
> > Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> >     at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> >     at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
> >     at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
> >     at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
> >     at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
> >     at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
> >     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
> >     ... 26 more
> > 17/05/17 17:25:16 ERROR persistence.ResourceStore: Create new store instance failed
> > java.lang.reflect.InvocationTargetException
> >     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> >     at org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:91)
> >     at org.apache.kylin.common.persistence.ResourceStore.getStore(ResourceStore.java:110)
> >     at org.apache.kylin.cube.CubeDescManager.getStore(CubeDescManager.java:370)
> >     at org.apache.kylin.cube.CubeDescManager.reloadAllCubeDesc(CubeDescManager.java:298)
> >     at org.apache.kylin.cube.CubeDescManager.<init>(CubeDescManager.java:109)
> >     at org.apache.kylin.cube.CubeDescManager.getInstance(CubeDescManager.java:81)
> >     at org.apache.kylin.cube.CubeInstance.getDescriptor(CubeInstance.java:109)
> >     at org.apache.kylin.cube.CubeSegment.getCubeDesc(CubeSegment.java:119)
> >     at org.apache.kylin.cube.CubeSegment.isEnableSharding(CubeSegment.java:467)
> >     at org.apache.kylin.cube.kv.RowKeyEncoder.<init>(RowKeyEncoder.java:48)
> >     at org.apache.kylin.cube.kv.AbstractRowKeyEncoder.createInstance(AbstractRowKeyEncoder.java:48)
> >     at org.apache.kylin.engine.spark.SparkCubingByLayer$2.call(SparkCubingByLayer.java:205)
> >     at org.apache.kylin.engine.spark.SparkCubingByLayer$2.call(SparkCubingByLayer.java:193)
> >     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
> >     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1018)
> >     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> >     at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
> >     at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
> >     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
> >     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> >     at org.apache.spark.scheduler.Task.run(Task.scala:89)
> >     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >     at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.lang.IllegalArgumentException: File not exist by 'kylin_metadata@hbase': /mnt/disk2/yarn/nm/usercache/kylin/appcache/application_1493867056374_0598/container_e21_1493867056374_0598_01_000002/kylin_metadata@hbase
> >     at org.apache.kylin.common.persistence.FileResourceStore.<init>(FileResourceStore.java:49)
> >     ... 29 more
> >
> >
>



-- 
Best regards,

Shaofeng Shi 史少锋
