MG> quoting Stack Overflow below:

"You can use an ELB as the bootstrap.servers,
The ELB will be used for the initial metadata request the client makes to 
figure out which topic partitions are on which brokers,
but after (the initial metadata request)
the brokers still need to be directly accessible to the client.
that it'll use the hostname of the server (or advertised.listeners setting if 
you need to customize it,
which, e.g. might be necessary on EC2 instances to get the public IP of a 
server).
If you were trying to use an ELB to make a Kafka cluster publicly available,
you'd need to make sure the advertised.listeners for each broker also makes it 
publicly accessible. "
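
For illustration, a minimal server.properties sketch of what the answer
describes (the public hostname and port below are placeholders, not taken
from the original post):

  # address the broker binds to locally
  listeners=PLAINTEXT://0.0.0.0:9092
  # address the broker reports in metadata responses; clients must be able
  # to reach this directly once the bootstrap via the ELB is done
  advertised.listeners=PLAINTEXT://ec2-198-51-100-10.compute-1.amazonaws.com:9092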

MG> for the initial metadata request you will see the ELB
MG> after the metadata request and the topic partition locations are
MG> determined, the ELB drops out and the client talks directly to the broker
MG> use a healthcheck-style lookup to determine the host/port registered for
MG> each broker under /brokers/ids/$id in ZooKeeper
MG> echo dump | nc localhost 2181 | grep brokers
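MG> for example, to see a single broker's registered endpoint (a sketch,
MG> assuming ZooKeeper on localhost:2181 and broker id 0):
MG>
MG>   bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/0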


https://stackoverflow.com/questions/38666795/does-kafka-support-elb-in-front-of-broker-cluster



does this help?

Martin
________________________________
From: Tyler Monahan <tjmonah...@gmail.com>
Sent: Thursday, June 21, 2018 6:17 PM
To: users@kafka.apache.org
Subject: Configuring Kerberos behind an ELB

Hello,

I have set up Kafka with Kerberos successfully; however, if I try to reach
Kafka through an ELB, the Kerberos authentication fails. The Kafka brokers
each use their unique hostname for Kerberos, and when going through an ELB
the consumer/producer only sees the ELB's DNS record, which doesn't have
Kerberos set up for it, causing auth to fail. If I try to set up a service
principal name for that DNS record, I can only associate it with one of the
brokers behind the ELB, so the other ones fail.
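
For reference, the per-broker setup described above looks roughly like the
following (hostnames, keytab path, and realm are placeholders):

  # server.properties on broker1
  listeners=SASL_PLAINTEXT://broker1.example.com:9092
  security.inter.broker.protocol=SASL_PLAINTEXT
  sasl.kerberos.service.name=kafka

  # kafka_server_jaas.conf on broker1
  KafkaServer {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      keyTab="/etc/security/keytabs/kafka.keytab"
      principal="kafka/broker1.example.com@EXAMPLE.COM";
  };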

I have tried setting up a service account and having the Kafka brokers use
that, which works when a consumer/producer is talking to the instances
through the ELB; however, inter-broker communication, which is also over
Kerberos, fails at that point because it goes directly to the other nodes
instead of through the ELB. I am not sure where to go from here, as there
doesn't appear to be a way to configure the inter-broker communication to
work differently than the incoming consumer communication, short of
getting rid of Kerberos.

Any advice would be greatly appreciated.

Tyler Monahan
