side logs: Marking the coordinator ip-XYZ:9092 (id: 2147482644
rack: null) dead for group MyGroup
Does anyone know what is the source of this issue?
I played with CONNECTIONS_MAX_IDLE_MS_CONFIG in the consumer-side Kafka
configuration, and it didn't affect the results.
best,
Shahab
Here is the related logs I found in consumer side:
2016-12-08 20:41:12.559 INFO internals.AbstractCoordinator
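A coordinator is typically marked dead when the consumer misses heartbeats within the session timeout, so the relevant knobs are the heartbeat/session settings rather than connections.max.idle.ms. A hedged sketch of consumer settings (values are illustrative, not taken from this thread):

```properties
# Consumer-side settings governing coordinator liveness (example values)
session.timeout.ms=30000
heartbeat.interval.ms=3000
# In clients with KIP-62 (0.10.1+), long message processing is bounded separately:
max.poll.interval.ms=300000
```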
if there are input records.
>
> Not sure why punctuate() is not triggered as you say that you do have
> arriving data.
>
> Can you share your code?
>
>
>
> -Matthias
>
>
> On 11/23/16 4:48 AM, shahab wrote:
> > Hello
coming to the topology (as I have logged the incoming
tuples in process()), punctuate() is never executed.
What am I missing?
best,
Shahab
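In the old (0.10.x) Processor API, punctuate() fires on stream time, which only advances as records with newer timestamps arrive; no new records means no punctuation, even if data was seen earlier. A minimal stdlib-only simulation of that behavior (class and method names are hypothetical, for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Simulates stream-time punctuation: punctuate fires only when record
// timestamps advance stream time past the next scheduled point.
public class StreamTimePunctuator {
    private final long intervalMs;
    private long nextPunctuate;
    private final List<Long> firedAt = new ArrayList<>();

    public StreamTimePunctuator(long intervalMs) {
        this.intervalMs = intervalMs;
        this.nextPunctuate = intervalMs;
    }

    // Called once per arriving record; timestampMs is the record's event time.
    public void onRecord(long timestampMs) {
        while (timestampMs >= nextPunctuate) {
            firedAt.add(nextPunctuate);   // punctuate would fire here
            nextPunctuate += intervalMs;
        }
    }

    public List<Long> punctuations() { return firedAt; }

    public static void main(String[] args) {
        StreamTimePunctuator p = new StreamTimePunctuator(1000);
        p.onRecord(500);   // no punctuation yet: stream time < 1000
        p.onRecord(2500);  // stream time jumps past 1000 and 2000
        System.out.println(p.punctuations()); // [1000, 2000]
    }
}
```

This is why records must keep arriving (with valid timestamps) for the scheduled callback to run; wall-clock punctuation only became available later.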
n.StringDeserializer");
Map<String, List<PartitionInfo>> topics = new KafkaConsumer<>(props).listTopics();
System.out.println(topics);
best,
Shahab
Thanks Noah. I installed Burrow and played with it a little. It seems, as
you pointed out, that I need to implement the alerting system myself. Do
you know of any other Kafka tools that can send alerts?
best,
/Shahab
On Wed, Sep 2, 2015 at 1:44 PM, noah <iamn...@gmail.com> wrote:
> We u
Hi,
I wonder how we can monitor lag (the difference between the consumer offset
and the log-end offset) when "kafka" is set as offset.storage, because
"kafka-run-class.sh kafka.tools.ConsumerOffsetChecker ..." works only when
ZooKeeper is used as the storage manager.
best,
/Shahab
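Whatever tool is used, the quantity being monitored is simple: per-partition lag is the log-end offset minus the last committed offset. A hedged stdlib-only sketch (partition names and values are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Computes per-partition consumer lag as log-end offset minus committed offset.
public class LagCalculator {
    public static Map<String, Long> lag(Map<String, Long> logEnd,
                                        Map<String, Long> committed) {
        Map<String, Long> out = new HashMap<>();
        for (Map.Entry<String, Long> e : logEnd.entrySet()) {
            // A partition with no committed offset is treated as fully behind.
            long consumed = committed.getOrDefault(e.getKey(), 0L);
            out.put(e.getKey(), e.getValue() - consumed);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Long> end = Map.of("XYZ-0", 150L, "XYZ-1", 80L);
        Map<String, Long> committed = Map.of("XYZ-0", 100L, "XYZ-1", 80L);
        System.out.println(lag(end, committed)); // XYZ-0 -> 50, XYZ-1 -> 0
    }
}
```

For Kafka-stored offsets, later releases ship bin/kafka-consumer-groups.sh --describe (with --new-consumer in the 0.9/0.10 era), which reports this lag per partition without going through ZooKeeper.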
Hi,
I would appreciate it if someone could point me to a Java example showing
how one can implement offset commits using the SimpleConsumer API. I have
not found any!
best,
/Shahab
Hi,
I have a Kafka cluster consisting of two servers. I created a topic XYZ
with 3 partitions and a replication factor of 2.
On the producer side, the producer is configured with a broker list
containing both brokers, broker0 and broker1.
Topic:XYZ PartitionCount:3 ReplicationFactor:2 Configs:
Topic:
Thanks Ewen for the clarification. I will test this.
best,
/Shahab
On Mon, Aug 10, 2015 at 9:03 PM, Ewen Cheslack-Postava e...@confluent.io
wrote:
You can use SimpleConsumer.getOffsetsBefore to get a list of offsets before
a Unix timestamp. However, this isn't per-message. The offsets
it is not in sync with the leader and in
fact it never became in sync again.
Now the question is how to make the first broker in sync again, so that it
appears in the ISR list and becomes leader for one of the partitions?
best,
/Shahab
but it
did not change.
Does anyone know how to resolve this?
best,
/Shahab
I just wonder if it is possible to read in batches using SimpleConsumer
instead of the high-level consumer? Does the same principle apply to the
low-level consumer (i.e. SimpleConsumer)?
best,
/Shahab
On Tue, Aug 4, 2015 at 9:10 PM, Gwen Shapira g...@confluent.io wrote:
To add some internals, the high level
. for example,
read 100 items at once!
Is this a correct observation, or am I missing something?
best,
/Shahab
Thanks a lot Shaminder for clarification and thanks Raja for pointing me to
the example.
best,
/shahab
On Tue, Aug 4, 2015 at 6:06 PM, Rajasekar Elango rela...@salesforce.com
wrote:
Here is an example of what sharninder suggested
http://ingest.tips/2014/10/12/kafka-high-level-consumer
Thanks a lot Guozhang. Very helpful comment.
best,
/Shahab
On Wed, Feb 19, 2014 at 5:46 PM, Guozhang Wang wangg...@gmail.com wrote:
Group management like load balancing only exists in high-level consumers;
SimpleConsumer does not have the group id setting since it does not have
group
. clientName)
Maybe I did something wrong, but I ran two consumers with the same
clientName, and both consumers still received exactly the same amount of
data from Kafka, while the data is supposed to be divided between the two
consumers (due to load balancing)!
best,
/Shahab
Thanks Jun,
I already set the retention policy to 1 hour and the size to 10 M, but it
didn't work; logs still pile up in the logs/ folder. Maybe I am missing
something.
best,
/Shahab
On Thu, Dec 12, 2013 at 4:57 PM, Jun Rao jun...@gmail.com wrote:
Log deletion is controlled by a retention policy
Hi
I just wonder why the log files in {kafka_path}/log are not deleted
automatically?
Is there any way to purge those files?
Also, is there any way to purge the Kafka queue (make it empty) without
having to consume or know the last fetched offset?
best,
/Shahab
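One reason retention can appear not to work: retention only deletes rolled (closed) log segments, and the active segment stays on disk until it rolls. A hedged broker-config sketch (example values; property names follow later broker versions, and 0.8-era brokers used slightly different names):

```properties
# Time- and size-based retention (example values, not from this thread)
log.retention.hours=1
log.retention.bytes=10485760
# Deletion only applies to rolled segments; smaller segments roll sooner
log.segment.bytes=10485760
```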
Thanks a lot, very good hints. I am trying to see what happened in my case.
best,
/Shahab
On Wed, Dec 11, 2013 at 5:16 PM, Jun Rao jun...@gmail.com wrote:
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped%2Cwhy%3F
?
Thanks,
Jun
with fetching (consumer) part, right?
best,
/Shahab
Kafka runs on one machine, with no clusters, replication, etc., a very
basic configuration.
The consumer config file is:
zookeeper.connect=myserver:2181
group.id=group1
zookeeper.session.timeout.ms=400
zookeeper.sync.time.ms=200