Hi MuChen,
Based on the error you posted, I would suggest checking whether the ZooKeeper quorum and the meta information configured for HBase are correct, and whether the machine running Flink can actually reach HBase.
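For example, a rough connectivity sanity check could look like the following (just a sketch: it assumes nc and zkCli.sh are available on the node running the Flink JobManager, and takes the quorum hosts and the /hbase znode from your DDL):

# Each ZooKeeper server in the quorum should answer "imok" to the ruok probe
for h in uhadoop-op3raf-master1 uhadoop-op3raf-master2 uhadoop-op3raf-core1; do
  echo ruok | nc $h 2181; echo " <- $h"
done

# HBase should have registered itself under the configured znode.parent,
# and the meta region should be assigned
zkCli.sh -server uhadoop-op3raf-master1:2181 ls /hbase
zkCli.sh -server uhadoop-op3raf-master1:2181 get /hbase/meta-region-server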
 
When the job is initialized, the Source has to connect to HBase through zk, so please verify the zk connectivity first. Hope this helps.
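If zk looks healthy, it may also be worth confirming from the same machine that a plain HBase client can read the table at all (an assumption here: the hbase shell is installed on that node and its client version matches the cluster):

hbase shell <<'EOF'
scan 'hbase_video_pic_title_q70', {LIMIT => 1}
EOF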
Best,
Roc Marshal.
On 2020-06-23 10:17:35, "MuChen" <9329...@qq.com> wrote:
>Hi, All:
>
>
>I ran into an error when querying an HBase table with flinksql, and I'm hoping someone can help.
>
>
>The steps and the error message are as follows:
>
>
>
>Flink is deployed on a master node of the Hadoop cluster.
>
>First, start a yarn-session:
>bin/yarn-session.sh -jm 1g -tm 4g -s 4 -qu root.flink -nm fsql-cli 2>&1 &
># two ERRORs show up during startup, but the session still comes up:
>[admin@uhadoop-op3raf-master2 flink10]$
>2020-06-23 09:30:56,402 ERROR org.apache.flink.shaded.curator.org.apache.curator.ConnectionState  - Authentication failed
>2020-06-23 09:30:56,515 ERROR org.apache.flink.shaded.curator.org.apache.curator.ConnectionState  - Authentication failed
>JobManager Web Interface: http://uhadoop-op3raf-core24:42976
>Then start the sql-client:
>bin/sql-client.sh embedded 
>Create a flinksql table that maps to the HBase table:
>CREATE TABLE hbase_video_pic_title_q70 (
>  key string,
>  cf1 ROW<vid string, q70 string>
>) WITH (
>  'connector.type' = 'hbase',
>  'connector.version' = '1.4.3',
>  'connector.table-name' = 'hbase_video_pic_title_q70',
>  'connector.zookeeper.quorum' = 'uhadoop-op3raf-master1:2181,uhadoop-op3raf-master2:2181,uhadoop-op3raf-core1:2181',
>  'connector.zookeeper.znode.parent' = '/hbase',
>  'connector.write.buffer-flush.max-size' = '10mb',
>  'connector.write.buffer-flush.max-rows' = '1000',
>  'connector.write.buffer-flush.interval' = '2s'
>);
>Run the following query:
>select key from hbase_video_pic_title_q70; 
>It fails with an error saying a connection to HBase cannot be created:
>[ERROR] Could not execute SQL statement. Reason:
>org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
>org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
>        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:336)
>        at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
>        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
>        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
>        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>        ... 6 more
>Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
>        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:152)
>        at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
>        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:379)
>        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
>        ... 7 more
>Caused by: org.apache.flink.runtime.client.JobExecutionException: Cannot initialize task 'Source: HBaseTableSource[schema=[key, cf1], projectFields=[0]] -> SourceConversion(table=[default_catalog.default_database.hbase_video_pic_title_q70, source: [HBaseTableSource[schema=[key, cf1], projectFields=[0]]]], fields=[key]) -> SinkConversionToTuple2 -> Sink: SQL Client Stream Collect Sink': Configuring the input format (null) failed: Cannot create connection to HBase.
>        at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:216)
>        at org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:255)
>        at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:227)
>        at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:215)
>        at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:120)
>        at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:105)
>        at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
>        at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
>        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
>        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
>        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:146)
>        ... 10 more
>Caused by: java.lang.Exception: Configuring the input format (null) failed: Cannot create connection to HBase.
>        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:80)
>        at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:212)
>        ... 20 more
>Caused by: java.lang.RuntimeException: Cannot create connection to HBase.
>        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:103)
>        at org.apache.flink.addons.hbase.HBaseRowInputFormat.configure(HBaseRowInputFormat.java:68)
>        at org.apache.flink.runtime.jobgraph.InputOutputFormatVertex.initializeOnMaster(InputOutputFormatVertex.java:77)
>        ... 21 more
>Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>        at org.apache.flink.addons.hbase.HBaseRowInputFormat.connectToTable(HBaseRowInputFormat.java:96)
>        ... 23 more
>Caused by: java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>        ... 26 more
>Caused by: java.lang.NoSuchFieldError: HBASE_CLIENT_PREFETCH
>        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:713)
>        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:652)
>        ... 31 more
>End of exception on server side>]
>Flink version: 1.10.0
>
>HBase version: 1.2.0
>
>HBase is deployed on the same Hadoop cluster.
>
>The HBase table was created through Hive, and data was written into it like this:
># In Hive:
>CREATE TABLE edw.hbase_video_pic_title_q70(
>    key string,
>    vid string,
>    q70 string
>)
>STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
>WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:vid,cf1:q70")
>TBLPROPERTIES ("hbase.table.name" = "hbase_video_pic_title_q70");
>
># Insert data from a Hive table into HBase
>INSERT OVERWRITE TABLE edw.hbase_video_pic_title_q70
>SELECT vid, vid, q70
>FROM dw.video_pic_title_q70;
>JARs under flink/lib:
>flink-dist_2.11-1.10.0.jar
>flink-hbase_2.11-1.10.0.jar
>flink-metrics-influxdb-1.10.0.jar
>flink-metrics-prometheus-1.10.0.jar
>flink-shaded-hadoop-2-uber-2.7.5-7.0.jar
>flink-table_2.11-1.10.0.jar
>flink-table-blink_2.11-1.10.0.jar
>hbase-client-1.2.0-cdh5.8.0.jar
>hbase-common-1.2.0.jar
>hbase-protocol-1.2.0.jar
>jna-4.2.2.jar
>jna-platform-4.2.2.jar
>log4j-1.2.17.jar
>oshi-core-3.4.0.jar
>slf4j-log4j12-1.7.15.jar
>The flink-conf.yaml configuration:
>jobmanager.rpc.address: localhost
>jobmanager.rpc.port: 6123
>jobmanager.heap.size: 1024m
>taskmanager.memory.process.size: 1568m
>taskmanager.numberOfTaskSlots: 1
>parallelism.default: 1
>high-availability: zookeeper
>high-availability.storageDir: hdfs:///flink/ha/
>high-availability.zookeeper.quorum: uhadoop-op3raf-master1,uhadoop-op3raf-master2,uhadoop-op3raf-core1
>state.checkpoints.dir: hdfs:///flink/checkpoint
>state.savepoints.dir: hdfs:///flink/flink-savepoints
>state.checkpoints.num-retained: 60
>state.backend.incremental: true
>jobmanager.execution.failover-strategy: region
>jobmanager.archive.fs.dir: hdfs:///flink/flink-jobs/
>historyserver.web.port: 8082
>historyserver.archive.fs.dir: hdfs:///flink/flink-jobs/
>historyserver.archive.fs.refresh-interval: 10000
>metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
>metrics.reporter.influxdb.host: 10.42.63.116
>metrics.reporter.influxdb.port: 8086
>metrics.reporter.influxdb.db: flink
>metrics.reporter.influxdb.username: flink
>metrics.reporter.influxdb.password: flink***
>metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
>metrics.reporter.promgateway.host: 10.42.63.116
>metrics.reporter.promgateway.port: 9091
>metrics.reporter.promgateway.jobName: tdflink_prom
>metrics.reporter.promgateway.randomJobNameSuffix: true
>metrics.reporter.promgateway.deleteOnShutdown: true
>metrics.system-resource: true
>yarn.application-attempts: 1
>The sql-client-defaults.yaml configuration:
>tables: [] # empty list
>functions: [] # empty list
>catalogs: [] # empty list
>execution:
>  # select the implementation responsible for planning table programs
>  # possible values are 'blink' (used by default) or 'old'
>  planner: blink
>  # 'batch' or 'streaming' execution
>  type: streaming
>  # allow 'event-time' or only 'processing-time' in sources
>  time-characteristic: event-time
>  # interval in ms for emitting periodic watermarks
>  periodic-watermarks-interval: 200
>  # 'changelog' or 'table' presentation of results
>  result-mode: table
>  # maximum number of maintained rows in 'table' presentation of results
>  max-table-result-rows: 1000000
>  # parallelism of the program
>  parallelism: 1
>  # maximum parallelism
>  max-parallelism: 128
>  # minimum idle state retention in ms
>  min-idle-state-retention: 0
>  # maximum idle state retention in ms
>  max-idle-state-retention: 0
>  # current catalog ('default_catalog' by default)
>  current-catalog: default_catalog
>  # current database of the current catalog (default database of the catalog by default)
>  current-database: default_database
>  # controls how table programs are restarted in case of a failures
>  restart-strategy:
>    # strategy type
>    # possible values are "fixed-delay", "failure-rate", "none", or "fallback" (default)
>    type: fallback
>deployment:
>  # general cluster communication timeout in ms
>  response-timeout: 5000
>  # (optional) address from cluster to gateway
>  gateway-address: ""
>  # (optional) port from cluster to gateway
>  gateway-port: 0
>echo $HADOOP_CLASSPATH:
>[admin@uhadoop-op3raf-master2 flink10]$ echo $HADOOP_CLASSPATH 
>/home/hadoop/contrib/capacity-scheduler/*.jar:/home/hadoop/conf:/home/hadoop/share/hadoop/common/lib/*:/home/hadoop/share/hadoop/common/*:/home/hadoop/share/hadoop/hdfs:/home/hadoop/share/hadoop/hdfs/lib/*:/home/hadoop/share/hadoop/hdfs/*:/home/hadoop/share/hadoop/yarn/lib/*:/home/hadoop/share/hadoop/yarn/*:/home/hadoop/share/hadoop/mapreduce/lib/*:/home/hadoop/share/hadoop/mapreduce/*
> 
>
>
>
>Any help would be appreciated.
>
>Best,
>
>MuChen.
