There may be more elegant ways to do this, but I'd think you could just
ls all the directories specified in log.dirs in your server.properties file for
Kafka. You should see one directory per topicname-partitionnumber there.
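As a quick sketch (the paths and topic name here are made up for illustration), extracting log.dirs from server.properties and listing the per-partition directories might look like:

```shell
# Simulate a Kafka data directory with two partitions of a 'clicks' topic
# (hypothetical layout; a real broker creates these directories itself).
mkdir -p /tmp/kafka-logs-demo/clicks-0 /tmp/kafka-logs-demo/clicks-1

# A minimal server.properties stand-in pointing at that directory.
printf 'log.dirs=/tmp/kafka-logs-demo\n' > /tmp/server.properties.demo

# Pull the log.dirs value out of the properties file...
dirs=$(sed -n 's/^log\.dirs=//p' /tmp/server.properties.demo)

# ...and list it: one directory per topicname-partitionnumber.
ls "$dirs"
```

On a real broker, point this at your actual server.properties; if log.dirs holds a comma-separated list, check each entry.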
Offhand it sounds to me like maybe something's evicting
Hi,
I keep getting the exception below when Kafka tries to connect to ZooKeeper while
ZooKeeper is momentarily unreachable. After that, the connection is not
restored unless we restart the servers.
This may be connected to this issue:
https://issues.apache.org/jira/browse/KAFKA-824 But I
I have a particular broker (version 0.8.2.1) in a cluster receiving about
15000 messages/second of around 100 bytes each (bytes-in / messages-in).
This broker has bursts of really high log flush latency p95s. The latency
sometimes goes to above 1.5 seconds from a steady state of < 20 ms.
Running
Using MirrorMaker, I would like to take events received from one topic
and write them to another. For example, for events received on the
topic 'clicks', I want to write them to 'mirrored.clicks' on my
destination cluster. Is that possible?
thanks,
d
Unfortunately, in order to consume from a specific partition, you will need to use
the SimpleConsumer API, which does not have consumer groups.
See here for details:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
On Tue, Sep 22, 2015 at 6:08 PM, Spandan Harithas
As far as I know, with a consumer group implementation you cannot pin consumers
to partitions. That logic is taken care of by the high level API on its own.
> On 23-Sep-2015, at 6:38 AM, Spandan Harithas Karamchedu
> wrote:
>
> Hi,
>
> We created a topic with 3
Hi,
We created a topic with 3 partitions and a replication factor of 3. We are
able to implement a consumer to get the data from a specific partition in a
topic, but we are stuck implementing a consumer within a specified
consumer group to be mapped to a single partition of a topic and get the
Thanks Steve. I followed your suggestion to get the topics.
What is weird is that the bad broker does not get any more traffic
(messages or bytes) when this happens. Also I have more than 2 GB (out of
28 GB) free memory according to collectd and running vmstat on the box, so I
hope that things don't
I am new to the Internet of Things. I have pushed temperature data to a Mosquitto
server and successfully consumed all the data. Now I want to push data from
an Arduino to a Kafka server and consume it from Kafka. Is there any Kafka library
for Arduino? What architecture would be suitable for scaling MQTT using
Ah, nice! Does not look like it is working, though. For some reason the
__consumer_offsets topic is still empty. I see there are a few debug(..) log
messages that might get displayed if things go wrong - would you know how to
get those displayed? (Right now I'm just running as 'java -jar ...'
I'm trying to set up a kafka consumer (in Java) that uses the new approach of
committing offsets (i.e. in the __consumer_offsets topic etc, rather than
through zookeeper).
Am I correct in believing that the current version (we're using Kafka 0.8.2.1)
does not expose this through the high level
0.8.2.1 already supports Kafka offset storage. You can
set offsets.storage=kafka in the consumer properties and the high-level API is able
to pick it up and commit offsets to Kafka.
Here is the code reference where the Kafka offset logic kicks in:
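A minimal consumer.properties sketch for this (dual.commit.enabled is shown as an assumption; it is intended for migrating offsets from ZooKeeper and can be dropped once all consumers in the group have moved over):

```properties
# Commit consumer offsets to Kafka's __consumer_offsets topic instead of ZooKeeper
offsets.storage=kafka
# Assumed migration aid: also commit to ZooKeeper while consumers are being switched over
dual.commit.enabled=true
```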
If you are using the console consumer to check the offsets topic, remember
that you need this line in consumer.properties:
exclude.internal.topics=false
On Tue, Sep 22, 2015 at 6:05 AM Joris Peeters
wrote:
> Ah, nice! Does not look like it is working, though. For
Yep, that was it ...
Everything works now. And the only thing that didn't, earlier, was my head.
Thanks all!
-Joris.
-Original Message-
From: noah [mailto:iamn...@gmail.com]
Sent: 22 September 2015 12:17
To: users@kafka.apache.org
Subject: Re: committing offsets
If you are using the
All of the information Todd posted is important to know. There was also a
JIRA related to this that has been committed to trunk:
https://issues.apache.org/jira/browse/KAFKA-2436
Before that patch, log.retention.hours was used to calculate
KafkaConfig.logRetentionTimeMillis. But it was not used in
One caveat: if you are relying on log.segment.ms to roll the current log
segment, it will not roll until both the time elapses and something new
arrives in the log.
In other words, if your topic/log segment is idle, no rolling will happen.
The theoretically ineligible log will still be the
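As a sketch, the topic-level overrides involved might look like this (values are hypothetical; at the topic level the names are segment.ms and retention.ms, set via kafka-topics.sh --alter --config):

```properties
# Hypothetical topic-level overrides
segment.ms=3600000      # roll a new segment after ~1 hour, once new data arrives
retention.ms=86400000   # rolled segments become eligible for deletion after ~1 day
```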