Using Flink SQL 1.12 to write to Hive, the job is not submitted to YARN successfully. The error message is as follows:
2021-01-26 20:44:23.133 [main] INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient  - Trying to connect to metastore with URI thrift://hdcom02.prd.com:9083
2021-01-26 20:44:23.133 [main] INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient  - Trying to connect to metastore with URI thrift://hdcom02.prd.com:9083
2021-01-26 20:44:23.134 [main] INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient  - Opened a connection to metastore, current connections: 2
2021-01-26 20:44:23.134 [main] INFO  org.apache.hadoop.hive.metastore.HiveMetaStoreClient  - Opened a connection to metastore, current connections: 2
2021-01-26 20:44:23.181 [main] WARN  org.apache.hadoop.hive.metastore.HiveMetaStoreClient  - set_ugi() not successful, Likely cause: new client talking to old server. Continuing without it.
org.apache.thrift.transport.TTransportException: null
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:4787)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:4773)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:224)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.table.catalog.hive.client.HiveShimV310.getHiveMetastoreClient(HiveShimV310.java:112)
        at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:274)
        at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:80)
        at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:32)
        at org.apache.flink.connectors.hive.HiveTableSink.consume(HiveTableSink.java:145)
        at org.apache.flink.connectors.hive.HiveTableSink.lambda$getSinkRuntimeProvider$0(HiveTableSink.java:137)
        at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalSink.createSinkTransformation(CommonPhysicalSink.scala:95)
        at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:109)
        at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:43)
        at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan(ExecNode.scala:59)
        at org.apache.flink.table.planner.plan.nodes.exec.ExecNode.translateToPlan$(ExecNode.scala:57)
        at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:43)
        at org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:66)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
        at scala.collection.Iterator.foreach(Iterator.scala:937)
        at scala.collection.Iterator.foreach$(Iterator.scala:937)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
        at scala.collection.IterableLike.foreach(IterableLike.scala:70)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike.map(TraversableLike.scala:233)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65)
        at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:676)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:767)
        at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:666)
        at cn.wesure.stream.flink.sql.FlinkSqlEngine.process(FlinkSqlEngine.java:129)
        at cn.wesure.stream.flink.FlinkSqlEngineApp.main(FlinkSqlEngineApp.java:26)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:343)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:213)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:816)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:248)
        at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1058)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1136)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1136)
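
The statement that triggers the failure is essentially of this shape (a simplified sketch only; the catalog, database, and table names here are placeholders, not my real ones):

```sql
-- HiveCatalog is registered beforehand (placeholder names throughout)
USE CATALOG myhive;

-- The sink is a Hive table; planning this INSERT is what opens the
-- metastore connection that fails in the trace above
INSERT INTO mydb.sink_table
SELECT id, name FROM mydb.source_table;
```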

The lib directory of the 1.12 installation looks like this:
-rw-r--r-- 1 dauser dauser   6322351 Jan 26 17:51 flink-connector-hive_2.12-1.12.1.jar
-rw-r--r-- 1 dauser dauser     91745 Jan 10 08:26 flink-csv-1.12.1.jar
-rw-r--r-- 1 dauser dauser 105273329 Jan 10 08:29 flink-dist_2.12-1.12.1.jar
-rw-r--r-- 1 dauser dauser    137005 Jan 10 08:25 flink-json-1.12.1.jar
-rw-r--r-- 1 dauser dauser   7709741 Jul 29 15:33 flink-shaded-zookeeper-3.4.14.jar
-rw-r--r-- 1 dauser dauser  34748023 Jan 10 08:28 flink-table_2.12-1.12.1.jar
-rw-r--r-- 1 dauser dauser  37777653 Jan 10 08:28 flink-table-blink_2.12-1.12.1.jar
-rw-rw-r-- 1 dauser dauser  40603464 Jan 26 11:43 hive-exec-3.1.0.jar
-rw-rw-r-- 1 dauser dauser    313702 Jan 26 17:43 libfb303-0.9.3.jar
-rw-r--r-- 1 dauser dauser    290339 Jan 26 11:41 logback-classic-1.2.3.jar
-rw-r--r-- 1 dauser dauser    471901 Jan 26 11:41 logback-core-1.2.3.jar

Kerberos authentication is configured; with version 1.11.1 the same job authenticates and is submitted to YARN successfully.
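
For reference, the Kerberos-related settings in flink-conf.yaml are of the following shape (the keytab path and principal shown here are placeholders, not the real values):

```yaml
# Kerberos credentials used by the Flink client and the YARN containers
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/user.keytab     # placeholder path
security.kerberos.login.principal: user@EXAMPLE.COM      # placeholder principal
security.kerberos.login.contexts: Client                 # e.g. for ZooKeeper
```

These settings are unchanged from the working 1.11.1 setup.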
Could someone please take a look?




--
Sent from: http://apache-flink.147419.n8.nabble.com/
