Hi, Roc Marshal:
????????????????????????????????????????

Best,
MuChen.




------------------ Original Message ------------------
From: "Roc Marshal" <flin...@126.com>
Date: Tuesday, June 23, 2020, 10:27 AM
To: "user-zh" <user-zh@flink.apache.org>

Subject: Re: Flink SQL fails to read HBase



Hi MuChen,
Please check whether the network between the Flink cluster and the zk / meta of HBase (i.e. the HBase source) is reachable, and whether the zk addresses are configured correctly.
Hope this helps.

Best,
Roc Marshal.
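One quick way to run the connectivity check suggested above is to probe the ZooKeeper client port from the Flink nodes. This is only a sketch: the hostnames are taken from the `connector.zookeeper.quorum` in the DDL below, and `probe_zk` is a hypothetical helper, not part of any Flink or HBase tooling.

```shell
# Probe a ZooKeeper server by opening a TCP connection to its client port.
# The redirection through bash's /dev/tcp succeeds only if a TCP
# connection can be established within the timeout.
probe_zk() {
  local host=$1 port=${2:-2181}
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "ok: $host:$port is reachable"
  else
    echo "cannot reach $host:$port"
  fi
}

# hostnames from 'connector.zookeeper.quorum'; adjust for your cluster
for h in uhadoop-op3raf-master1 uhadoop-op3raf-master2 uhadoop-op3raf-core1; do
  probe_zk "$h"
done
```

If a host is unreachable from the node where the JobManager runs, the HBase connector will fail with exactly a "Cannot create connection to HBase" style error, so this narrows down whether the problem is network or classpath.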
On 2020-06-23 10:17:35, "MuChen" <9329...@qq.com> wrote:
>Hi, All:
>
>I ran into a problem when reading HBase data with Flink SQL: the job cannot connect to HBase.
>
>The detailed steps are as follows.
>
>Note: Flink is deployed on the master node of the Hadoop cluster.
>
>First, start a yarn-session:
&gt;bin/yarn-session.sh -jm 1g -tm 4g -s 4 -qu root.flink -nm fsql-cli 
2&amp;gt;&amp;amp;1 &amp;amp; # ?????????????????????????????????? 
[admin@uhadoop-op3raf-master2 flink10]$ 2020-06-23 09:30:56,402 ERROR 
org.apache.flink.shaded.curator.org.apache.curator.ConnectionState&nbsp; - 
Authentication failed 2020-06-23 09:30:56,515 ERROR 
org.apache.flink.shaded.curator.org.apache.curator.ConnectionState&nbsp; - 
Authentication failed JobManager Web Interface: 
http://uhadoop-op3raf-core24:42976 
>Then, start the sql-client:
bin/sql-client.sh embedded
>Next, create a Flink SQL table that maps the HBase table, with the following DDL:
CREATE TABLE hbase_video_pic_title_q70 (
  key string,
  cf1 ROW<vid string, q70 string>
) WITH (
  'connector.type' = 'hbase',
  'connector.version' = '1.4.3',
  'connector.table-name' = 'hbase_video_pic_title_q70',
  'connector.zookeeper.quorum' = 'uhadoop-op3raf-master1:2181,uhadoop-op3raf-master2:2181,uhadoop-op3raf-core1:2181',
  'connector.zookeeper.znode.parent' = '/hbase',
  'connector.write.buffer-flush.max-size' = '10mb',
  'connector.write.buffer-flush.max-rows' = '1000',
  'connector.write.buffer-flush.interval' = '2s'
);
>Then run a query:
select key from hbase_video_pic_title_q70;
>The query fails with an error saying the connection to HBase cannot be created:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:336)
        at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
        ... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:152)
        at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:379)
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
        ... 7 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'Source: HBaseTableSource[schema=[key, cf1], projectFields=[0]] -> SourceConversion(table=[default_catalog.default_database.hbase_video_pic_title_q70, source: [HBaseTableSource[schema=[key, cf1], projectFields=[0]]]], fields=[key]) -> SinkConversionToTuple2 -> Sink: SQL Client Stream Collect Sink': Configuring the input format (null) failed: Cannot create connection to HBase.
        at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:216)
        at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:255)
        at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:227)
        at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:215)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:120)
        at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:105)
        at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
        at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:146)
        ... 10 more
Caused by: java.lang.Exception: Configuring the input format (null) failed: Cannot create connection to HBase.
        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:80)
        at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:212)
        ... 20 more
Caused by: java.lang.RuntimeException: Cannot create connection to HBase.
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:103)
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.configure(HBaseRowInputFormat.java:68)
        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:77)
        ... 21 more
Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:96)
        ... 23 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
        ... 26 more
Caused by: java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:713)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:652)
        ... 31 more
End of exception on server side>]
>Flink version: 1.10.0
>
>HBase version: 1.2.0
>
>HBase is deployed on the same Hadoop cluster.
>
>The HBase table is populated through a Hive-mapped table; the statements used to write into HBase are:
# in hive
CREATE TABLE edw.hbase_video_pic_title_q70(
    key string,
    vid string,
    q70 string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:vid,cf1:q70")
TBLPROPERTIES ("hbase.table.name" = "hbase_video_pic_title_q70");

# write into hbase by inserting into the hive table
INSERT OVERWRITE TABLE edw.hbase_video_pic_title_q70
SELECT vid, vid, q70 FROM dw.video_pic_title_q70;
>Jars in flink/lib:
flink-dist_2.11-1.10.0.jar
flink-hbase_2.11-1.10.0.jar
flink-metrics-influxdb-1.10.0.jar
flink-metrics-prometheus-1.10.0.jar
flink-shaded-hadoop-2-uber-2.7.5-7.0.jar
flink-table_2.11-1.10.0.jar
flink-table-blink_2.11-1.10.0.jar
hbase-client-1.2.0-cdh5.8.0.jar
hbase-common-1.2.0.jar
hbase-protocol-1.2.0.jar
jna-4.2.2.jar
jna-platform-4.2.2.jar
log4j-1.2.17.jar
oshi-core-3.4.0.jar
slf4j-log4j12-1.7.15.jar
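The final `NoSuchFieldError: HBASE_CLIENT_PREFETCH` in the stack trace, together with this jar list (a CDH `hbase-client-1.2.0-cdh5.8.0.jar` next to Apache `hbase-common-1.2.0.jar`), suggests the client and common jars may come from different builds. A way to check which jar actually declares the missing field, assuming a JDK (`javap`) is available; `has_field` is a hypothetical helper, and the guess that the constant lives on `org.apache.hadoop.hbase.HConstants` in CDH builds is an assumption to verify against your own jars:

```shell
# Report whether a class inside a jar declares a given field; useful for
# chasing a NoSuchFieldError across mixed jar versions.
has_field() {
  local jar=$1 class=$2 field=$3
  javap -classpath "$jar" "$class" 2>/dev/null | grep -q "\b$field\b"
}

# Compare the CDH client jar against the Apache common jar (run in flink/lib).
for jar in hbase-client-1.2.0-cdh5.8.0.jar hbase-common-1.2.0.jar; do
  if has_field "$jar" org.apache.hadoop.hbase.HConstants HBASE_CLIENT_PREFETCH; then
    echo "$jar: declares HBASE_CLIENT_PREFETCH"
  else
    echo "$jar: field not found"
  fi
done
```

If only one of the two jars declares the field, the class actually loaded at runtime decides whether the error fires, which is consistent with mixing CDH and Apache artifacts of the same nominal version.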
>Here is flink-conf.yaml:
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.memory.process.size: 1568m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: uhadoop-op3raf-master1,uhadoop-op3raf-master2,uhadoop-op3raf-core1
state.checkpoints.dir: hdfs:///flink/checkpoint
state.savepoints.dir: hdfs:///flink/flink-savepoints
state.checkpoints.num-retained: 60
state.backend.incremental: true
jobmanager.execution.failover-strategy: region
jobmanager.archive.fs.dir: hdfs:///flink/flink-jobs/
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs:///flink/flink-jobs/
historyserver.archive.fs.refresh-interval: 10000
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: 10.42.63.116
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
metrics.reporter.influxdb.username: flink
metrics.reporter.influxdb.password: flink***
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: 10.42.63.116
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: tdflink_prom
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: true
metrics.system-resource: true
yarn.application-attempts: 1
>Here is sql-client-defaults.yaml:
tables: []     # empty list
functions: []  # empty list
catalogs: []   # empty list

execution:
  # select the implementation responsible for planning table programs
  # possible values are 'blink' (used by default) or 'old'
  planner: blink
  # 'batch' or 'streaming' execution
  type: streaming
  # allow 'event-time' or only 'processing-time' in sources
  time-characteristic: event-time
  # interval in ms for emitting periodic watermarks
  periodic-watermarks-interval: 200
  # 'changelog' or 'table' presentation of results
  result-mode: table
  # maximum number of maintained rows in 'table' presentation of results
  max-table-result-rows: 1000000
  # parallelism of the program
  parallelism: 1
  # maximum parallelism
  max-parallelism: 128
  # minimum idle state retention in ms
  min-idle-state-retention: 0
  # maximum idle state retention in ms
  max-idle-state-retention: 0
  # current catalog ('default_catalog' by default)
  current-catalog: default_catalog
  # current database of the current catalog (default database of the catalog by default)
  current-database: default_database
  # controls how table programs are restarted in case of a failures
  restart-strategy:
    # strategy type
    # possible values are "fixed-delay", "failure-rate", "none", or "fallback" (default)
    type: fallback

deployment:
  # general cluster communication timeout in ms
  response-timeout: 5000
  # (optional) address from cluster to gateway
  gateway-address: ""
  # (optional) port from cluster to gateway
  gateway-port: 0
>echo $HADOOP_CLASSPATH:
>[admin@uhadoop-op3raf-master2 flink10]$ echo $HADOOP_CLASSPATH
/home/hadoop/contrib/capacity-scheduler/*.jar:/home/hadoop/conf:/home/hadoop/share/hadoop/common/lib/*:/home/hadoop/share/hadoop/common/*:/home/hadoop/share/hadoop/hdfs:/home/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/share/hadoop/hdfs/*:/home/hadoop/share/hadoop/yarn/lib/*:/home/hadoop/share/hadoop/yarn/*:/home/hadoop/share/hadoop/mapreduce/lib/*:/home/hadoop/share/hadoop/mapreduce/*
 
>
>
>Any help would be appreciated. Thanks!
>
>Best,
>
>MuChen.
