Flink version: 1.10.0
HBase version: 2.1.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/Zach/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.4.1/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/Zach/.m2/repository/org/slf4j/slf4j-log4j12/1.7.7/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Exception in thread "main" org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSinkFactory' in the classpath.

Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.addons.hbase.HBaseTableFactory
Mismatched properties:
'connector.version' expects '1.4.3', but is '2.1.0'

The following properties are requested:
connector.table-name=user_hbase
connector.type=hbase
connector.version=2.1.0
connector.write.buffer-flush.interval=2s
connector.write.buffer-flush.max-rows=1000
connector.write.buffer-flush.max-size=10mb
connector.zookeeper.quorum=cdh1:2181,cdh2:2181,cdh3:2181
connector.zookeeper.znode.parent=/hbase
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=rowkey
schema.1.data-type=ROW<`sex` VARCHAR(2147483647), `age` INT, `created_time` TIMESTAMP(3)>
schema.1.name=cf

The following factories have been considered:
org.apache.flink.api.java.io.jdbc.JDBCTableSourceSinkFactory
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
org.apache.flink.addons.hbase.HBaseTableFactory
    at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
    at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
    at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
    at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
    at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:310)
    at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:190)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:495)
    at org.rabbit.sql.FromKafkaSinkHbase$.main(FromKafkaSinkHbase.scala:61)
    at org.rabbit.sql.FromKafkaSinkHbase.main(FromKafkaSinkHbase.scala)

Query:

streamTableEnv.sqlUpdate(
  """
    |
    |CREATE TABLE user_hbase(
    |  rowkey string,
    |  cf ROW(sex VARCHAR, age INT, created_time TIMESTAMP(3))
    |) WITH (
    |  'connector.type' = 'hbase',
    |  'connector.version' = '2.1.0',
    |  'connector.table-name' = 'user_hbase',
    |  'connector.zookeeper.quorum' = 'cdh1:2181,cdh2:2181,cdh3:2181',
    |  'connector.zookeeper.znode.parent' = '/hbase',
    |  'connector.write.buffer-flush.max-size' = '10mb',
    |  'connector.write.buffer-flush.max-rows' = '1000',
    |  'connector.write.buffer-flush.interval' = '2s'
    |)
    |""".stripMargin)
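For reference, the mismatch reported above ('connector.version' expects '1.4.3', but is '2.1.0') means the HBase table factory shipped with Flink 1.10 (flink-hbase, org.apache.flink.addons.hbase.HBaseTableFactory) only matches DDL that declares version 1.4.3; it has no factory for '2.1.0'. A minimal, untested sketch of the same DDL with only that property changed, everything else kept as in the original query (whether the bundled 1.4.x HBase client then works against an HBase 2.1.0 server is a separate compatibility question):

// Assumes the same streamTableEnv as in the original code; only 'connector.version' differs.
streamTableEnv.sqlUpdate(
  """
    |CREATE TABLE user_hbase(
    |  rowkey STRING,
    |  cf ROW(sex VARCHAR, age INT, created_time TIMESTAMP(3))
    |) WITH (
    |  'connector.type' = 'hbase',
    |  'connector.version' = '1.4.3',
    |  'connector.table-name' = 'user_hbase',
    |  'connector.zookeeper.quorum' = 'cdh1:2181,cdh2:2181,cdh3:2181',
    |  'connector.zookeeper.znode.parent' = '/hbase',
    |  'connector.write.buffer-flush.max-size' = '10mb',
    |  'connector.write.buffer-flush.max-rows' = '1000',
    |  'connector.write.buffer-flush.interval' = '2s'
    |)
    |""".stripMargin)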
