Hi,
Is there documentation on how connection management and connection pooling
work in Kafka for multiple brokers? Do I need to take care of my own
connection management for the cluster? E.g., while publishing messages, if one
broker stops responding, I need to switch to the other. Does the Kafka client
take
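(For what it's worth, with the 0.8-era producer the client handles failover itself once it has cluster metadata; a minimal sketch of the relevant producer properties — broker hostnames here are placeholders, and this assumes the old producer API:)

```properties
# Seed brokers used only to fetch cluster metadata; the producer then
# talks to whichever broker currently leads each partition.
metadata.broker.list=broker1:9092,broker2:9092
# On a send failure, refresh metadata and retry against the new leader.
message.send.max.retries=3
retry.backoff.ms=100
```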
Hi,
I'm not sure if anyone can give you exact numbers, but I think you'll find
Kafka brokers very light. We run them on (I think) medium EC2 instances,
and the Kafka brokers put hardly any strain on them. If it helps give you
an idea, go to https://apps.sematext.com/demo and look for "SA.Prod
Yes. True. Thanks for your help.
> Date: Tue, 11 Mar 2014 11:48:20 -0700
> Subject: Re: Remote Zookeeper
> From: b...@b3k.us
> To: users@kafka.apache.org
>
> If they are on different physical machines then binding to localhost/using
> localhost as the host name is unlikely to be what you want.
>
If they are on different physical machines then binding to localhost/using
localhost as the host name is unlikely to be what you want.
On Tuesday, March 11, 2014, A A wrote:
> Thanks. I already checked out the wiki and step 6 in particular.
> Just to clarify, Zk, Broker1 and Broker2 are on 3 dif
I have now added the host.name property for Broker1 and Broker2, and it works.
Maybe someone can modify the wiki and add this for Step 6?
Also, I would like to mention, as previously replied in my thread: I didn't
HAVE to add all the brokers to the broker-list
$KAFKA_HOME/bin/kafka-console-producer.sh
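(For reference, the fix was along these lines in each broker's server.properties — the IP here is a placeholder for whatever address the machine is actually reachable at:)

```properties
# Broker 1 — advertise a reachable address instead of the default
# (which ZooKeeper was registering as localhost:9092)
broker.id=1
host.name=192.168.1.121
port=9092
```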
Thanks. I already checked out the wiki and step 6 in particular.
Just to clarify, Zk, Broker1 and Broker2 are on 3 different physical machines.
> From: b...@b3k.us
> Date: Tue, 11 Mar 2014 10:58:50 -0700
> Subject: Re: Remote Zookeeper
> To: users@kafka.apache.org
>
> "I'd suggest deleting the z
"I'd suggest deleting the zookeeper and Kafka logs and starting over using
the getting started tutorial from the wiki."
|
v
"Could this be an issue?
INFO Registered broker 2 at path /brokers/ids/2 with address localhost:9092.
INFO Registered broker 1 at path /brokers/ids/1 with address
loca
high volume query output streams will overwhelm browsers, as you've
experienced. best practice is don't do that.
On Tue, Mar 11, 2014 at 10:53 AM, Pierre Andrews wrote:
> looks great!
>
> I haven't tried it yet, but had made experiments with websockets and kafka
> sometime ago and found that the
Yeah that is very cool. I added it to the ecosystem page:
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
-Jay
On Tue, Mar 11, 2014 at 10:48 AM, Benjamin Black wrote:
> exactly. i'm using it for streaming query output to dashboards.
>
>
> On Tue, Mar 11, 2014 at 10:44 AM, Jay Kreps
yes, each session spawns a new consumer if needed. a resource-heavy but
wildly simplifying assumption.
On Tue, Mar 11, 2014 at 10:51 AM, Joe Stein wrote:
> Cool! Is every user's dashboard another consumer group?
>
>
> /***
> Joe Stein
> Founder, Princi
looks great!
I haven't tried it yet, but I had made some experiments with websockets and
kafka some time ago and found that the browsers (at least chrome and firefox) had
issues keeping up with the throughput of my queues and would eventually
freeze when too much data was coming in too fast.
Have you had
Okay, thanks. I removed the zookeeper and kafka logs:
/tmp/kafka-logs
/tmp/logs
/var/zookeeper/version-2/
/tmp/zookeeper*.log (on Zk)
restarted zookeeper
$KAFKA_HOME/bin/kafka-list-topic.sh --zookeeper 192.168.1.120:2181
no topics exist!
Started Broker 2
$KAFKA/kafka-server-start.sh $KAFKA_CONFIG/ser
Cool! Is every user's dashboard another consumer group?
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop
/
> On Mar 11, 2014, at
exactly. i'm using it for streaming query output to dashboards.
On Tue, Mar 11, 2014 at 10:44 AM, Jay Kreps wrote:
> Very cool. If I understand correctly this is a kind of proxy that would
> connect web browsers to Kafka? Any information you could give on the use
> cases this is for?
>
> -Jay
>
Very cool. If I understand correctly this is a kind of proxy that would
connect web browsers to Kafka? Any information you could give on the use
cases this is for?
-Jay
On Mon, Mar 10, 2014 at 11:02 AM, Benjamin Black wrote:
> I put this up over the weekend, thought it might be useful to folks
Seems you may have put your cluster in a very confused state with random
addition and removal of brokers and topics. I'd suggest deleting the
zookeeper and Kafka logs and starting over using the getting started
tutorial from the wiki.
On Tuesday, March 11, 2014, A A wrote:
> I noticed a problem w
I noticed a problem with my topics. The previous two topics would only be
present on 1 broker. So I created a new topic
$KAFKA_HOME/bin/kafka-create-topic.sh --zookeeper 192.168.1.120:2181 --replica
2 --partition 1 --topic test-rep
$KAFKA_HOME/bin/kafka-list-topic.sh --zookeeper 192.168.1.120:2
Is there a firewall that's blocking connections on port 9092? Also, the
broker list should be comma separated.
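(i.e., the invocation from the previous message, corrected to use a comma-separated --broker-list — this assumes both brokers are reachable on 9092:)

```shell
$KAFKA_HOME/bin/kafka-console-producer.sh \
  --broker-list localhost:9092,192.168.1.124:9092 \
  --topic test
```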
On Tue, Mar 11, 2014 at 9:02 AM, A A wrote:
> Sorry, one of the brokers was down. Brought it back up. Tried the
> following
>
> $KAFKA_HOME/bin/kafka-console-producer.sh --broker-li
Sorry, one of the brokers was down. Brought it back up. Tried the following
$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list localhost:9092
192.168.1.124:9092 --topic test
hello brokers
[2014-03-11 10:16:55,547] WARN Error while fetching metadata [{TopicMetadata
for topic test ->
No
Okay thanks. Just to verify my setup I tried the following on broker1 (by
publishing to localhost)
$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic
test
test message
this is a test message
another test message
$KAFKA_HOME/bin/kafka-console-consumer.sh --zookeeper
That's not how Kafka works. You need to pass the full list of brokers.
On Tuesday, March 11, 2014, A A wrote:
> Hi again.
>
> Got the setup working. I now have 2 brokers (broker 1 and broker 2) with
> one remote zk. I was also able to create some topics
>
> $KAFKA_HOME/bin/kafka-list-topic.sh --
Hi again.
Got the setup working. I now have 2 brokers (broker 1 and broker 2) with one
remote zk. I was also able to create some topics
$KAFKA_HOME/bin/kafka-list-topic.sh --zookeeper 192.168.1.120:2181
topic: test   partition: 0   leader: 1   replicas: 1   isr: 1
topic: test1  partit