ywww commented on issue #3127:
URL: https://github.com/apache/iceberg/issues/3127#issuecomment-995717883


   Same problem with Hive 3.1.2.
   
   -----------------------------------------
   
   hive> drop table test.hive_iceberg_test1;
   OK
   Time taken: 2.002 seconds
   hive> CREATE TABLE test.hive_iceberg_test1 (
       >   id bigint, 
       >   name string
       > ) PARTITIONED BY (
       >   ds string
       > ) 
       > STORED BY 'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler';
   OK
   Time taken: 1.229 seconds
   hive> insert into table test.hive_iceberg_test1 select 1,'1',1;;
   Query ID = root_20211216194143_6388a8d2-9b42-47a5-8685-587d43d0eaaf
   Total jobs = 1
   Launching Job 1 out of 1
   Number of reduce tasks is set to 0 since there's no reduce operator
   Starting Job = job_1639039128241_0038, Tracking URL = http://emr-header-1.cluster-267641:20888/proxy/application_1639039128241_0038/
   Kill Command = /usr/local/complat/adp/hadoop/bin/mapred job  -kill job_1639039128241_0038
   Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 0
   2021-12-16 19:41:56,284 Stage-2 map = 0%,  reduce = 0%
   2021-12-16 19:42:03,664 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 2.62 sec
   MapReduce Total cumulative CPU time: 2 seconds 620 msec
   Ended Job = job_1639039128241_0038 with errors
   Error during job, obtaining debugging information...
   Job Tracking URL: http://emr-header-1.cluster-267641:20888/proxy/application_1639039128241_0038/
   Examining task ID: task_1639039128241_0038_m_000000 (and more) from job job_1639039128241_0038
   FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
   MapReduce Jobs Launched:
   Stage-Stage-2: Map: 1   Cumulative CPU: 2.62 sec   HDFS Read: 178200 HDFS Write: 3484 FAIL
   Total MapReduce CPU Time Spent: 2 seconds 620 msec
   -------------------------------------------
   
   
   2021-12-16 19:42:03,687 INFO [CommitterEvent Processor #1] org.apache.hadoop.hive.metastore.HiveMetaStoreClient: HMSC::open(): Could not find delegation token. Creating KERBEROS-based thrift connection.
   2021-12-16 19:42:03,718 ERROR [CommitterEvent Processor #1] org.apache.thrift.transport.TSaslTransport: SASL negotiation failure
   javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
           at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
           at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
           at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
           at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
           at org.apache.hadoop.hive.metastore.security.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:51)
           at org.apache.hadoop.hive.metastore.security.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:48)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1732)
           at org.apache.hadoop.hive.metastore.security.TUGIAssumingTransport.open(TUGIAssumingTransport.java:48)
           at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:516)
           at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:224)
           at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:137)
           at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
           at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
           at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
           at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
           at org.apache.iceberg.common.DynConstructors$Ctor.newInstanceChecked(DynConstructors.java:60)
           at org.apache.iceberg.common.DynConstructors$Ctor.newInstance(DynConstructors.java:73)
           at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:53)
           at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:32)
           at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:118)
           at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:49)
           at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:76)
           at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:181)
           at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:94)
           at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:77)
           at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:93)
           at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:115)
           at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:105)
           at org.apache.iceberg.mr.hive.HiveIcebergOutputCommitter.commitTable(HiveIcebergOutputCommitter.java:280)
           at org.apache.iceberg.mr.hive.HiveIcebergOutputCommitter.lambda$commitJob$2(HiveIcebergOutputCommitter.java:193)
           at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:405)
           at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:214)
           at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:198)
           at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:190)
           at org.apache.iceberg.mr.hive.HiveIcebergOutputCommitter.commitJob(HiveIcebergOutputCommitter.java:188)
           at org.apache.hadoop.mapred.OutputCommitter.commitJob(OutputCommitter.java:291)
           at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:286)
           at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:238)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
           at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:162)
           at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
           at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:189)
           at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
           at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
           at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
           at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
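
   For context on the trace: the CommitterEvent processor runs inside the MR ApplicationMaster, and HiveIcebergOutputCommitter.commitJob -> Catalogs.loadTable -> HiveClientPool.newClient opens a fresh HiveMetaStoreClient there. The first log line shows it finds no HMS delegation token, so it falls back to a KERBEROS-based thrift connection, which then fails because that container has no Kerberos TGT. Below is a minimal diagnostic sketch of my own (not part of Iceberg or Hive; the class name CredentialDump is made up) that prints the credentials a JVM actually holds; run with the Hadoop classpath in a given process, it shows whether that process has a TGT or a delegation token before HiveMetaStoreClient.open() is reached.

   import java.io.IOException;
   import org.apache.hadoop.security.UserGroupInformation;
   import org.apache.hadoop.security.token.Token;

   // Hypothetical diagnostic, not part of Iceberg/Hive: print the credentials
   // the current JVM holds, to see whether a process (e.g. the MR AM) has a
   // Kerberos TGT or an HMS delegation token before HiveMetaStoreClient.open()
   // attempts the KERBEROS-based thrift connection shown in the log above.
   public class CredentialDump {
     public static void main(String[] args) throws IOException {
       UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
       System.out.println("user:           " + ugi.getUserName());
       System.out.println("auth method:    " + ugi.getAuthenticationMethod());
       System.out.println("kerberos creds: " + ugi.hasKerberosCredentials());
       // Delegation tokens attached to this UGI; in the failing run above there
       // is apparently no Hive metastore token, hence the Kerberos fallback.
       for (Token<?> token : ugi.getTokens()) {
         System.out.println("token:          " + token.getKind() + " for " + token.getService());
       }
     }
   }

   In the client JVM that submitted the job above this would be expected to report KERBEROS with valid credentials (the job was accepted and the mapper ran to 100%); the stack trace suggests the AM-side process has neither a TGT nor a delegation token for the metastore.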