Hi

> Also, is there a plan for when HBaseTableSource will support SupportsFilterPushDown?
I searched and the community has no related issue yet. If this is a strong requirement, you can open an issue in the community [1] so the community can add support.
As for the second exception stack: if you have confirmed that "org.apache.hive:hive-hbase-handler:2.1.1" is indeed loaded, I suspect this is a bug. cc Rui Li
to confirm.
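For reference, the innermost cause in the trace (an NPE thrown from `Class.forName0` inside `HiveTableInputFormat.createInputSplits`) is the pattern you get when a null class name reaches `Class.forName` — consistent with the input format class name missing for the HBase-backed table. A minimal stdlib-only sketch of that pattern (`ForNameNpeDemo` and `tryLoad` are made-up names for illustration, not Flink code):

```java
public class ForNameNpeDemo {
    // Attempt to load a class by (possibly null) name and report the outcome.
    static String tryLoad(String className) {
        try {
            Class.forName(className);
            return "loaded";
        } catch (ClassNotFoundException e) {
            // A wrong but non-null name would land here instead.
            return "ClassNotFoundException";
        } catch (NullPointerException e) {
            // A null name throws NPE from the native forName0,
            // matching the bottom of the reported stack trace.
            return "NullPointerException";
        }
    }

    public static void main(String[] args) {
        // Simulates the suspected condition: no input format class
        // recorded for the table, so the name passed in is null.
        System.out.println(tryLoad(null));
    }
}
```

So the NPE itself points at a null class name rather than a missing jar; a missing jar would surface as ClassNotFoundException instead.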

Best regards
Leonard
[1] https://issues.apache.org/jira/projects/FLINK/summary 
> 
> The exception log for "select * from hive_hbase_t1" is as follows.
> 
> 
> Flink SQL> select * from hive_hbase_t1;
> 2020-08-28 13:20:19,985 WARN  org.apache.hadoop.hive.conf.HiveConf            
>             
> [] - HiveConf of name hive.vectorized.use.checked.expressions does not exist
> 2020-08-28 13:20:19,985 WARN  org.apache.hadoop.hive.conf.HiveConf            
>             
> [] - HiveConf of name hive.strict.checks.no.partition.filter does not exist
> 2020-08-28 13:20:19,985 WARN  org.apache.hadoop.hive.conf.HiveConf            
>             
> [] - HiveConf of name hive.strict.checks.orderby.no.limit does not exist
> 2020-08-28 13:20:19,985 WARN  org.apache.hadoop.hive.conf.HiveConf            
>             
> [] - HiveConf of name hive.vectorized.input.format.excludes does not exist
> 2020-08-28 13:20:19,986 WARN  org.apache.hadoop.hive.conf.HiveConf            
>             
> [] - HiveConf of name hive.strict.checks.bucketing does not exist
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.runtime.rest.util.RestClientException: [Internal server
> error., <Exception on server side:
> org.apache.flink.runtime.client.JobSubmissionException: Failed to submit
> job.
>       at
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:344)
>       at
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
>       at
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
>       at
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
>       at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>       at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>       at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>       at
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>       at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>       at
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not
> instantiate JobManager.
>       at
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:398)
>       at
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>       ... 6 more
> Caused by: org.apache.flink.runtime.JobException: Creating the input splits
> caused an error: Unable to instantiate the hadoop input format
>       at
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:272)
>       at
> org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:814)
>       at
> org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:228)
>       at
> org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:269)
>       at
> org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:242)
>       at
> org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:229)
>       at
> org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:119)
>       at
> org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:103)
>       at
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:284)
>       at 
> org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:272)
>       at
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
>       at
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
>       at
> org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:140)
>       at
> org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
>       at
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:388)
>       ... 7 more
> Caused by: org.apache.flink.connectors.hive.FlinkHiveException: Unable to
> instantiate the hadoop input format
>       at
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:307)
>       at
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:282)
>       at
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:66)
>       at
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:258)
>       ... 21 more
> Caused by: java.lang.NullPointerException
>       at java.lang.Class.forName0(Native Method)
>       at java.lang.Class.forName(Class.java:348)
>       at
> org.apache.flink.connectors.hive.read.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:305)
>       ... 24 more
> 
> End of exception on server side>]
> 
> Flink SQL> 
> 
> 
> 
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/
