Hello All,

I need some help with a Kerberized setup. I have two Ambari-managed clusters, Cluster A and 
Cluster B, and both are Kerberized against the same KDC.


Use case: access the Hive data on Cluster B from Cluster A.
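
For reference, the job on Cluster A does roughly the following. This is a minimal sketch, 
assuming a hypothetical table name and placeholder Kerberos principal/realm; the metastore 
URI is the one that appears in the log further down:

    import org.apache.spark.sql.SparkSession

    object RemoteHiveRead {
      def main(args: Array[String]): Unit = {
        // Spark session on Cluster A pointed at Cluster B's Hive metastore.
        // The thrift URI matches the log; the SASL principal is a placeholder.
        val spark = SparkSession.builder()
          .appName("read-remote-hive")
          .config("hive.metastore.uris", "thrift://10.128.0.39:9083")
          .config("hive.metastore.sasl.enabled", "true")
          .config("hive.metastore.kerberos.principal", "hive/_HOST@EXAMPLE.REALM") // placeholder
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical table name; the real job then writes the rows out to an Ampool table.
        spark.sql("SELECT * FROM default.orders").show(10)

        spark.stop()
      }
    }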


Actions done so far:


- The remote Cluster B principal and keytab have been provided to Cluster A (the user is 
admadmin).

- The remote cluster's Hive metastore principal/keytab have been provided to Cluster A.

- A Spark job is run on Cluster A (Spark on YARN) to access the data from Cluster B.

- Cluster A is able to connect to the Hive metastore of the remote Cluster B (a connectivity 
sketch follows this list).

- Now I am getting an error related to Hadoop tokens (any help or suggestion is 
appreciated). The error log is pasted after the sketch below.
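
The metastore connectivity check mentioned above is roughly the following. Again a minimal 
sketch, with placeholder principal, realm, and keytab path (the truncated principal shown 
in the log is not filled in):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.hive.conf.HiveConf
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    import org.apache.hadoop.security.UserGroupInformation

    object MetastoreCheck {
      def main(args: Array[String]): Unit = {
        // Log in with the remote-cluster keytab handed over to Cluster A.
        // Principal and keytab path are placeholders, not the real values.
        val conf = new Configuration()
        conf.set("hadoop.security.authentication", "kerberos")
        UserGroupInformation.setConfiguration(conf)
        UserGroupInformation.loginUserFromKeytab(
          "admadmin/some-host.example.com@EXAMPLE.REALM",
          "/etc/security/keytabs/admadmin.keytab")

        // Point a Hive client at Cluster B's secured metastore.
        val hiveConf = new HiveConf(conf, classOf[HiveConf])
        hiveConf.set("hive.metastore.uris", "thrift://10.128.0.39:9083")
        hiveConf.set("hive.metastore.sasl.enabled", "true")
        hiveConf.set("hive.metastore.kerberos.principal", "hive/_HOST@EXAMPLE.REALM") // placeholder

        val client = new HiveMetaStoreClient(hiveConf)
        println(client.getAllDatabases) // should list Cluster B's databases
        client.close()
      }
    }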



18/10/16 20:33:55 INFO RMProxy: Connecting to ResourceManager at 
davinderrc15.c.ampool-141120.internal/10.128.15.198:8030
18/10/16 20:33:55 INFO YarnRMClient: Registering the ApplicationMaster
18/10/16 20:33:55 INFO YarnAllocator: Will request 2 executor container(s), 
each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
18/10/16 20:33:55 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: 
ApplicationMaster registered as 
NettyRpcEndpointRef(spark://YarnAM@10.128.15.198:38524)
18/10/16 20:33:55 INFO YarnAllocator: Submitted 2 unlocalized container 
requests.
18/10/16 20:33:55 INFO ApplicationMaster: Started progress reporter thread with 
(heartbeat : 3000, initial allocation : 200) intervals
18/10/16 20:33:56 INFO AMRMClientImpl: Received new token for : 
davinderrc15.c.ampool-141120.internal:45454
18/10/16 20:33:56 INFO YarnAllocator: Launching container 
container_e07_1539521606680_0045_02_000002 on host 
davinderrc15.c.ampool-141120.internal for executor with ID 1
18/10/16 20:33:56 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
18/10/16 20:33:56 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
18/10/16 20:33:56 INFO ContainerManagementProtocolProxy: Opening proxy : 
davinderrc15.c.ampool-141120.internal:45454
18/10/16 20:33:57 INFO YarnAllocator: Launching container 
container_e07_1539521606680_0045_02_000003 on host 
davinderrc15.c.ampool-141120.internal for executor with ID 2
18/10/16 20:33:57 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
18/10/16 20:33:57 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
18/10/16 20:33:57 INFO ContainerManagementProtocolProxy: Opening proxy : 
davinderrc15.c.ampool-141120.internal:45454
18/10/16 20:33:59 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (10.128.15.198:39686) 
with ID 1
18/10/16 20:33:59 INFO BlockManagerMasterEndpoint: Registering block manager 
davinderrc15.c.ampool-141120.internal:36291 with 366.3 MB RAM, 
BlockManagerId(1, davinderrc15.c.ampool-141120.internal, 36291, None)
18/10/16 20:33:59 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered 
executor NettyRpcEndpointRef(spark-client://Executor) (10.128.15.198:39704) 
with ID 2
18/10/16 20:33:59 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/10/16 20:33:59 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook 
done
18/10/16 20:33:59 INFO SharedState: Setting hive.metastore.warehouse.dir 
('null') to the value of spark.sql.warehouse.dir 
('file:/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/spark-warehouse').
18/10/16 20:33:59 INFO SharedState: Warehouse path is 
'file:/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/spark-warehouse'.
18/10/16 20:33:59 INFO BlockManagerMasterEndpoint: Registering block manager 
davinderrc15.c.ampool-141120.internal:45507 with 366.3 MB RAM, 
BlockManagerId(2, davinderrc15.c.ampool-141120.internal, 45507, None)
18/10/16 20:34:00 INFO HiveUtils: Initializing HiveMetastoreConnection version 
1.2.1 using Spark classes.
18/10/16 20:34:00 INFO HiveClientImpl: Attempting to login to Kerberos using 
principal: admadmin/ad...@ampool.io and keytab: 
admadmin.keytab-a796621d-bacd-47e2-bd97-077090fe8aa8
18/10/16 20:34:00 INFO UserGroupInformation: Login successful for user 
admadmin/ad...@ampool.io using keytab file 
admadmin.keytab-a796621d-bacd-47e2-bd97-077090fe8aa8
18/10/16 20:34:01 INFO metastore: Trying to connect to metastore with URI 
thrift://10.128.0.39:9083
18/10/16 20:34:01 INFO metastore: Connected to metastore.
18/10/16 20:34:01 INFO SessionState: Created local directory: 
/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/tmp/admadmin
18/10/16 20:34:01 INFO SessionState: Created local directory: 
/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/tmp/90b26140-0c51-47b0-a32e-58e42fc6e764_resources
18/10/16 20:34:01 INFO SessionState: Created HDFS directory: 
/tmp/hive/admadmin/90b26140-0c51-47b0-a32e-58e42fc6e764
18/10/16 20:34:01 INFO SessionState: Created local directory: 
/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/tmp/admadmin/90b26140-0c51-47b0-a32e-58e42fc6e764
18/10/16 20:34:01 INFO SessionState: Created HDFS directory: 
/tmp/hive/admadmin/90b26140-0c51-47b0-a32e-58e42fc6e764/_tmp_space.db
18/10/16 20:34:01 INFO HiveClientImpl: Warehouse location for Hive client 
(version 1.2.1) is 
file:/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/spark-warehouse
18/10/16 20:34:01 INFO HiveClientImpl: Attempting to login to Kerberos using 
principal: admadmin/ad...@ampool.io and keytab: 
admadmin.keytab-a796621d-bacd-47e2-bd97-077090fe8aa8
18/10/16 20:34:01 INFO UserGroupInformation: Login successful for user 
admadmin/ad...@ampool.io using keytab file 
admadmin.keytab-a796621d-bacd-47e2-bd97-077090fe8aa8
18/10/16 20:34:01 INFO SessionState: Created local directory: 
/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/tmp/3643bfd5-2400-4644-b84e-faae1b1d7cec_resources
18/10/16 20:34:01 INFO SessionState: Created HDFS directory: 
/tmp/hive/admadmin/3643bfd5-2400-4644-b84e-faae1b1d7cec
18/10/16 20:34:01 INFO SessionState: Created local directory: 
/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/tmp/admadmin/3643bfd5-2400-4644-b84e-faae1b1d7cec
18/10/16 20:34:01 INFO SessionState: Created HDFS directory: 
/tmp/hive/admadmin/3643bfd5-2400-4644-b84e-faae1b1d7cec/_tmp_space.db
18/10/16 20:34:01 INFO HiveClientImpl: Warehouse location for Hive client 
(version 1.2.1) is 
file:/hadoop/yarn/local/usercache/admadmin/appcache/application_1539521606680_0045/container_e07_1539521606680_0045_02_000001/spark-warehouse
18/10/16 20:34:01 INFO StateStoreCoordinatorRef: Registered 
StateStoreCoordinator endpoint
18/10/16 20:34:04 INFO AmpoolDataFrame: SaveToAmpool :: table= default_orders1, 
parameters= Map(ampool.locator.host -> davinderrc15.c.ampool-141120.internal, 
path -> default_orders1, ampool.block.size -> 1000, ampool.num.splits -> 113, 
ampool.block.format -> AMP_BYTES, ampool.table.redundancy -> 0, 
ampool.locator.port -> 10334)
18/10/16 20:34:05 INFO Utils: No cache found.. trying to create using: 
{ampool.locator.host=davinderrc15.c.ampool-141120.internal, 
path=default_orders1, ampool.block.size=1000, ampool.num.splits=113, 
ampool.block.format=AMP_BYTES, ampool.table.redundancy=0, 
ampool.locator.port=10334}
18/10/16 20:34:07 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 380.9 KB, free 397.7 MB)
18/10/16 20:34:07 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in 
memory (estimated size 35.8 KB, free 397.7 MB)
18/10/16 20:34:07 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
10.128.15.198:36610 (size: 35.8 KB, free: 398.1 MB)
18/10/16 20:34:07 INFO SparkContext: Created broadcast 0 from
18/10/16 20:34:08 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 236 for 
admadmin on 10.128.0.39:8020
18/10/16 20:34:08 INFO TokenCache: Got dt for 
hdfs://ar-n1-kc.c.ampool-141120.internal:8020; Kind: HDFS_DELEGATION_TOKEN, 
Service: 10.128.0.39:8020, Ident: (HDFS_DELEGATION_TOKEN token 236 for admadmin)
18/10/16 20:34:08 INFO FileInputFormat: Total input paths to process : 1
18/10/16 20:34:08 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 237 for 
admadmin on 10.128.0.39:8020
18/10/16 20:34:08 INFO TokenCache: Got dt for 
hdfs://ar-n1-kc.c.ampool-141120.internal:8020; Kind: HDFS_DELEGATION_TOKEN, 
Service: 10.128.0.39:8020, Ident: (HDFS_DELEGATION_TOKEN token 237 for admadmin)
18/10/16 20:34:08 INFO FileInputFormat: Total input paths to process : 1
18/10/16 20:34:08 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 238 for 
admadmin on 10.128.0.39:8020
18/10/16 20:34:08 INFO TokenCache: Got dt for 
hdfs://ar-n1-kc.c.ampool-141120.internal:8020; Kind: HDFS_DELEGATION_TOKEN, 
Service: 10.128.0.39:8020, Ident: (HDFS_DELEGATION_TOKEN token 238 for admadmin)
18/10/16 20:34:08 INFO FileInputFormat: Total input paths to process : 1
18/10/16 20:34:08 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 239 for 
admadmin on 10.128.0.39:8020
18/10/16 20:34:08 INFO TokenCache: Got dt for 
hdfs://ar-n1-kc.c.ampool-141120.internal:8020; Kind: HDFS_DELEGATION_TOKEN, 
Service: 10.128.0.39:8020, Ident: (HDFS_DELEGATION_TOKEN token 239 for admadmin)
18/10/16 20:34:08 INFO FileInputFormat: Total input paths to process : 1
18/10/16 20:34:08 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 240 for 
admadmin on 10.128.0.39:8020
18/10/16 20:34:08 INFO TokenCache: Got dt for 
hdfs://ar-n1-kc.c.ampool-141120.internal:8020; Kind: HDFS_DELEGATION_TOKEN, 
Service: 10.128.0.39:8020, Ident: (HDFS_DELEGATION_TOKEN token 240 for admadmin)
18/10/16 20:34:08 INFO FileInputFormat: Total input paths to process : 1
18/10/16 20:34:08 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 241 for 
admadmin on 10.128.0.39:8020
18/10/16 20:34:08 INFO TokenCache: Got dt for 
hdfs://ar-n1-kc.c.ampool-141120.internal:8020; Kind: HDFS_DELEGATION_TOKEN, 
Service: 10.128.0.39:8020, Ident: (HDFS_DELEGATION_TOKEN token 241 for admadmin)
18/10/16 20:34:08 INFO FileInputFormat: Total input paths to process : 1
18/10/16 20:34:08 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 242 for 
admadmin on 10.128.0.39:8020
18/10/16 20:34:08 INFO TokenCache: Got dt for 
hdfs://ar-n1-kc.c.ampool-141120.internal:8020; Kind: HDFS_DELEGATION_TOKEN, 
Service: 10.128.0.39:8020, Ident: (HDFS_DELEGATION_TOKEN token 242 for admadmin)
18/10/16 20:34:08 INFO FileInputFormat: Total input paths to process : 1
18/10/16 20:34:08 INFO SparkContext: Starting job: foreachPartition at 
MTableWriter.scala:140
18/10/16 20:34:08 INFO DAGScheduler: Got job 0 (foreachPartition at 
MTableWriter.scala:140) with 14 output partitions
18/10/16 20:34:08 INFO DAGScheduler: Final stage: ResultStage 0 
(foreachPartition at MTableWriter.scala:140)
18/10/16 20:34:08 INFO DAGScheduler: Parents of final stage: List()
18/10/16 20:34:08 INFO DAGScheduler: Missing parents: List()
18/10/16 20:34:08 INFO DAGScheduler: Submitting ResultStage 0 
(MapPartitionsRDD[24] at foreachPartition at MTableWriter.scala:140), which has 
no missing parents
18/10/16 20:34:08 INFO MemoryStore: Block broadcast_1 stored as values in 
memory (estimated size 22.1 KB, free 397.7 MB)
18/10/16 20:34:08 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in 
memory (estimated size 9.1 KB, free 397.7 MB)
18/10/16 20:34:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
10.128.15.198:36610 (size: 9.1 KB, free: 398.1 MB)
18/10/16 20:34:08 INFO SparkContext: Created broadcast 1 from broadcast at 
DAGScheduler.scala:1006
18/10/16 20:34:08 INFO DAGScheduler: Submitting 14 missing tasks from 
ResultStage 0 (MapPartitionsRDD[24] at foreachPartition at 
MTableWriter.scala:140) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 
4, 5, 6, 7, 8, 9, 10, 11, 12, 13))
18/10/16 20:34:08 INFO YarnClusterScheduler: Adding task set 0.0 with 14 tasks
18/10/16 20:34:08 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 
davinderrc15.c.ampool-141120.internal, executor 2, partition 0, RACK_LOCAL, 
5022 bytes)
18/10/16 20:34:08 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 
davinderrc15.c.ampool-141120.internal, executor 1, partition 1, RACK_LOCAL, 
5022 bytes)
18/10/16 20:34:09 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
davinderrc15.c.ampool-141120.internal:36291 (size: 9.1 KB, free: 366.3 MB)
18/10/16 20:34:09 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
davinderrc15.c.ampool-141120.internal:45507 (size: 9.1 KB, free: 366.3 MB)
18/10/16 20:34:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
davinderrc15.c.ampool-141120.internal:36291 (size: 35.8 KB, free: 366.3 MB)
18/10/16 20:34:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
davinderrc15.c.ampool-141120.internal:45507 (size: 35.8 KB, free: 366.3 MB)
18/10/16 20:34:10 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 
davinderrc15.c.ampool-141120.internal, executor 1, partition 2, RACK_LOCAL, 
5022 bytes)
18/10/16 20:34:10 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 
davinderrc15.c.ampool-141120.internal, executor 2, partition 3, RACK_LOCAL, 
5022 bytes)
18/10/16 20:34:10 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 
davinderrc15.c.ampool-141120.internal, executor 2): java.io.IOException: Failed 
on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]; Host Details : local host is: 
"davinderrc15.c.ampool-141120.internal/10.128.15.198"; destination host is: 
"ar-n1-kc.c.ampool-141120.internal":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:785)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
        at org.apache.hadoop.ipc.Client.call(Client.java:1498)
        at org.apache.hadoop.ipc.Client.call(Client.java:1398)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy15.getBlockLocations(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:272)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
        at com.sun.proxy.$Proxy16.getBlockLocations(Unknown Source)
        at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1238)
        at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1225)
        at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1213)
        at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:309)
        at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:274)
        at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:266)
        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1538)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:332)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:327)
        at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:340)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:786)
        at 
org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
        at 
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        at 
org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:251)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:250)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]
        at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:720)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683)
        at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:770)
        at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
        at org.apache.hadoop.ipc.Client.call(Client.java:1451)
        ... 56 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot 
authenticate via:[TOKEN, KERBEROS]
        at 
org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
        at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
        at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595)
        at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:758)
        ... 59 more

18/10/16 20:34:10 INFO TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) on 
davinderrc15.c.ampool-141120.internal, executor 1: java.io.IOException (Failed 
on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]; Host Details : local host is: 
"davinderrc15.c.ampool-141120.internal/10.128.15.198"; destination host is: 
"ar-n1-kc.c.ampool-141120.internal":8020; ) [duplicate 1]
18/10/16 20:34:10 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 4, 
davinderrc15.c.ampool-141120.internal, executor 1, partition 1, RACK_LOCAL, 
5022 bytes)

