One thing to note is that we do support controlled shutdown as part of the
regular shutdown hook in the broker. The wiki was not very clear about
this, and I have updated it accordingly. You can turn on controlled
shutdown by setting controlled.shutdown.enable to true in the Kafka config.
This will
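For reference, a minimal broker config sketch enabling controlled shutdown. The property names match the 0.8 broker config; the retry values shown are illustrative, not recommendations.

```properties
# server.properties: drain leadership off this broker before it exits
controlled.shutdown.enable=true
# optional tuning (values here are illustrative; check your broker version's defaults)
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```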
Hi all,
I have a Kafka 0.8 cluster of two nodes on the same machine, with 4
partitions, communicating through a single ZooKeeper.
I am producing data using the Kafka Producer using the following code:
KeyedMessage<String, byte[]> data = new KeyedMessage<String, byte[]>(topic, input);
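For context, a producer.properties sketch that a producer like this might use. Property names are from the 0.8 producer config; the broker host/port values are placeholders.

```properties
# producer.properties (Kafka 0.8): brokers used to bootstrap topic metadata
metadata.broker.list=localhost:9092,localhost:9093
# pass raw byte[] payloads through unmodified
serializer.class=kafka.serializer.DefaultEncoder
# wait for the partition leader to ack each message
request.required.acks=1
```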
Hi Jun,
I did put in only one topic while starting the consumer and used the same
API, createMessageStreams.
As for the trace level logs of kafka consumer, we will send that to you soon.
Thanks again for replying.
Nihit
On 10-Jul-2013, at 10:38 PM, Jun Rao jun...@gmail.com wrote:
Also,
Thanks Jun, done. I've created KAFKA-972 issue for that.
Regards
On Thu, Jul 11, 2013 at 1:16 AM, Jun Rao jun...@gmail.com wrote:
That's actually not expected. We should only return live brokers to the
client. It seems that we never clear the live broker cache in the brokers.
This is a bug.
Yes.
Thanks,
Jun
On Wed, Jul 10, 2013 at 10:55 PM, Ryan Chan ryanchan...@gmail.com wrote:
We are already using zk.connect to connect to ZooKeeper and have registered
multiple brokers (same topic/partitions), so when a consumer queries
ZooKeeper, is load balancing already done?
Thanks
The consumer iterator by default blocks if there are no new messages. You
can configure it to be non-blocking. See consumer.timeout.ms
http://kafka.apache.org/08/configuration.html
Thanks,
Jun
On Wed, Jul 10, 2013 at 11:55 PM, Ankit Jain ankitm.j...@impetus.co.in wrote:
Hi all,
I have a
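A consumer config sketch for the non-blocking behavior described above. The timeout value is illustrative; when it elapses with no new message, the iterator throws ConsumerTimeoutException instead of blocking indefinitely.

```properties
# consumer.properties: give up waiting after 5 seconds instead of blocking forever
consumer.timeout.ms=5000
```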
Hm... the cache may explain some odd behavior I was seeing in our
cluster yesterday.
The ZooKeeper information for which nodes were in-sync replicas was
different from the data I received in a metadata request response.
ZooKeeper said two nodes were in the ISR and the metadata response said
only the
We need to improve how the metadata caching works in Kafka. Currently, we
have multiple places where we send the updated metadata from the
controller to the individual brokers when the state of the metadata
changes. This is hard to track. What we need to implement is to let the
metadata structure
Hi
We have integrated kafka consumer and producer into our java application.
We've noticed some issues when loading classes, which seem to be caused by
different JDK versions. So I wonder which JDK version is recommended for
developing Kafka clients.
Regards,
Libo
Thank you, Jay.
When talking about flush rates, I think you mean the opposite of what was
said here:
However very high application flush rates can lead to high latency when
the flush does occur.
should be
However very low application flush rates (infrequent flushes) can lead to
high latency
Hi all,
I was wondering if anybody here has and was willing to share experience
about designing and operating complex multi-datacenter/multi-cluster
Kafka deployments in which data must flow from and to several distinct
Kafka clusters with more complex semantics than what MirrorMaker
provides.
Hi,
So, is it possible to configure the weighting? I believe this needs to be
done on the ZooKeeper side; can you give me some hints?
Thanks.
On Thu, Jul 11, 2013 at 11:50 PM, Jun Rao jun...@gmail.com wrote:
Yes.
Thanks,
Jun
On Wed, Jul 10, 2013 at 10:55 PM, Ryan Chan
What we have at LinkedIn is an extra aggregate cluster per data center. We
use MirrorMaker to copy data from the local cluster in each of the data
centers to the aggregate one.
Thanks,
Jun
On Thu, Jul 11, 2013 at 5:18 PM, Maxime Petazzoni maxime.petazz...@turn.com wrote:
Hi all,
I was
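For illustration, a MirrorMaker invocation along the lines of the setup described above. The flag names are from the 0.8 tooling; the config file paths and topic whitelist are placeholders.

```shell
# run on a host near the aggregate cluster:
# consume from the local cluster, produce into the aggregate cluster
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config local-cluster-consumer.properties \
  --producer.config aggregate-cluster-producer.properties \
  --whitelist 'mytopic.*' \
  --num.streams 4
```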
Currently, the balancing logic in the high level consumer is not
configurable. There is a low level SimpleConsumer that you can use to gain
more control, but it needs more coding.
Thanks,
Jun
On Thu, Jul 11, 2013 at 9:05 PM, Ryan Chan ryanchan...@gmail.com wrote:
Hi,
So, is it possible to