Accessing a Kafka broker that is running inside a Docker container causes people a lot 
of problems, and it sounds like that may be your problem.

Basically, it's all about configuring two Kafka listeners: one for clients inside 
the Docker network and an external one for your PC.

This is the best blog post I have found that explains the issue and how to fix it:

https://rmoff.net/2018/08/02/kafka-listeners-explained/
The tl;dr from that post: "You need to set advertised.listeners (or 
KAFKA_ADVERTISED_LISTENERS if you're using Docker images) to the external 
address (host/IP) so that clients can correctly connect to it. Otherwise 
they'll try to connect to the internal host address, and if that's not 
reachable then ..."
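
To make that concrete, here's a minimal sketch of the broker settings involved 
(the listener names, the "kafka" hostname and the ports are placeholders, not 
from your setup; adapt them to your compose file or docker run flags):

# server.properties, or the matching KAFKA_* env vars in most Docker images
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:29092
# advertised.listeners is what clients are told to reconnect to:
# "kafka" only resolves inside the Docker network; localhost works from your PC
advertised.listeners=INTERNAL://kafka:9092,EXTERNAL://localhost:29092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL

Clients inside the Docker network then bootstrap against kafka:9092, while 
anything on the host uses localhost:29092.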

________________________________
From: Romain Rigaux <romain.rig...@gmail.com>
Sent: 15 November 2020 8:19 AM
To: user@phoenix.apache.org <user@phoenix.apache.org>
Subject: Status of Apache Kafka Plugin?

Hello,

I am trying to have a setup as simple as possible for demoing the Hue SQL Editor 
with Phoenix.

https://phoenix.apache.org/kafka.html

I looked at demoing live data from a Kafka topic being indexed in real time into 
HBase. I am trying to run the PhoenixConsumerTool inside this Docker image, which 
is simple:

https://hub.docker.com/r/boostport/hbase-phoenix-all-in-one

HBase is in non-distributed mode:

/opt/hbase/bin/hbase --config "$HBASE_CONF_DIR" \
  org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
false

But the main issues are:

  1.  I can't find a ZooKeeper address
  2.  All of the config properties in kafka-consumer-json.properties seem to be 
ignored by PhoenixConsumerTool

2020-11-14 15:50:37,863 INFO zookeeper.ClientCnxn: Socket error occurred: 
localhost/127.0.0.1:2181: Connection refused
2020-11-14 15:50:37,964 WARN zookeeper.ReadOnlyZKClient: 0x1ecee32c to 
localhost:2181 failed for get of /hbase/master, code = CONNECTIONLOSS, retries 
= 10
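
(The client seems to fall back to localhost:2181, so my guess is that the 
hbase-site.xml carrying the real quorum isn't on the tool's classpath. For 
reference, this is roughly the entry I'd expect it to pick up; the hostname is 
just the container's, taken from the warning below:)

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>8370ebd5fde0</value>
</property>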


2020-11-14 15:49:19,227 WARN consumer.ConsumerConfig: The configuration 
zookeeperQuorum = 8370ebd5fde0:2181 was supplied but isn't a known config.
2020-11-14 15:49:19,227 WARN consumer.ConsumerConfig: The configuration topics 
= topic1,topic2 was supplied but isn't a known config.
2020-11-14 15:49:19,227 WARN consumer.ConsumerConfig: The configuration 
serializer.rowkeyType = uuid was supplied but isn't a known config.
2020-11-14 15:49:19,228 WARN consumer.ConsumerConfig: The configuration 
serializer = json was supplied but isn't a known config.
2020-11-14 15:49:19,228 WARN consumer.ConsumerConfig: The configuration ddl = 
CREATE TABLE IF NOT EXISTS SAMPLE2(uid VARCHAR NOT NULL,c1 VARCHAR,c2 
VARCHAR,c3 VARCHAR CONSTRAINT pk PRIMARY KEY(uid)) was supplied but isn't a 
known config.
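
Putting those warnings together, the properties file looks roughly like this 
(everything except bootstrap.servers is taken verbatim from the warnings above; 
bootstrap.servers is my assumption based on the example on the Phoenix Kafka page):

# kafka-consumer-json.properties (reconstructed)
# bootstrap.servers doesn't appear in the warnings; assumed to point at the broker
bootstrap.servers=8370ebd5fde0:9092
topics=topic1,topic2
serializer=json
serializer.rowkeyType=uuid
zookeeperQuorum=8370ebd5fde0:2181
ddl=CREATE TABLE IF NOT EXISTS SAMPLE2(uid VARCHAR NOT NULL,c1 VARCHAR,c2 VARCHAR,c3 VARCHAR CONSTRAINT pk PRIMARY KEY(uid))

I can't tell whether these warnings (which come from Kafka's own ConsumerConfig) 
mean the values are actually dropped, or just that the Kafka consumer doesn't 
recognize the Phoenix-specific keys.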

The command I use:

HADOOP_CLASSPATH=$(/opt/hbase/bin/hbase classpath):/opt/hbase/conf \
  hadoop-3.2.1/bin/hadoop jar phoenix-kafka-5.0.0-HBase-2.0-minimal.jar \
  org.apache.phoenix.kafka.consumer.PhoenixConsumerTool \
  -Dfs.defaultFS=file:/// --file kafka-consumer-json.properties

Do you have any tips on how to debug this better?
Are you aware of an even simpler solution for ingesting live data into HBase?

Thanks!

Romain
